Store Metadata as Values #99

Draft pull request: wants to merge 100 commits into base: latest

Commits (100)
e631c6a  Dataset updates (eeddy, Sep 13, 2024)
0a89aa5  Updates (eeddy, Sep 16, 2024)
c7a6a12  Added Grab Myo (eeddy, Sep 16, 2024)
cd7f61d  Updates (eeddy, Sep 16, 2024)
4ea2ebc  Changed pathing (eeddy, Sep 16, 2024)
9c1f741  Update OneSubjectEMaGerDataset to new format (cbmorrell, Sep 16, 2024)
7573004  Add split parameter to prepare_data (cbmorrell, Sep 16, 2024)
642d26c  Updates (eeddy, Sep 16, 2024)
a8759c8  Updates (eeddy, Sep 16, 2024)
0e830bd  Fixed Grab Myo (eeddy, Sep 16, 2024)
48bdd44  Added resp to EPN (eeddy, Sep 17, 2024)
fff712d  Updated the data handler to run faster (eeddy, Sep 17, 2024)
dcaa006  Made them all parse fast (eeddy, Sep 17, 2024)
4267971  Sped up window parsing (eeddy, Sep 18, 2024)
e8284c7  Fixed data handler (eeddy, Sep 18, 2024)
203cbc8  Updated ref (eeddy, Sep 18, 2024)
30fef47  Made faster (eeddy, Sep 18, 2024)
ebd500b  Fixed (eeddy, Sep 18, 2024)
d8789d9  Updated (eeddy, Sep 19, 2024)
d74f08b  Updates (eeddy, Sep 19, 2024)
580094d  Updates (eeddy, Sep 19, 2024)
06e8dec  Updates (eeddy, Sep 19, 2024)
a9fc178  Added cropping (eeddy, Sep 19, 2024)
fa65513  Updated to crop better (eeddy, Sep 20, 2024)
6ed821a  Updates (eeddy, Sep 24, 2024)
bbf80c4  Undo (eeddy, Sep 24, 2024)
a65a8e9  Hyser dataset (cbmorrell, Sep 24, 2024)
31517ae  Updated libemg (eeddy, Sep 25, 2024)
0ed0e30  Fix zoom logical error (cbmorrell, Sep 25, 2024)
854e307  Merge branch 'dataset_updates' of https://github.com/LibEMG/libemg in… (cbmorrell, Sep 25, 2024)
44c7df4  Add regex filter packaging and .hea support to FilePackager (cbmorrell, Sep 25, 2024)
a7916e2  Add check in regex package function (cbmorrell, Sep 25, 2024)
12453e2  Fix Hyser1DOF (cbmorrell, Sep 25, 2024)
4f62584  Added CI dataset (eeddy, Sep 26, 2024)
981352f  Updates (eeddy, Sep 26, 2024)
2d58dae  Updates (eeddy, Sep 26, 2024)
6b609b8  UpdaTes (eeddy, Sep 27, 2024)
45e5abf  added limb position (eeddy, Sep 27, 2024)
275dd36  radman->radmand (ECEEvanCampbell, Sep 27, 2024)
a0a0e1e  added h5py req (ECEEvanCampbell, Sep 27, 2024)
e5d7a8e  added kaufmannMD (ECEEvanCampbell, Sep 27, 2024)
d1b5225  created kaufmann class (ECEEvanCampbell, Sep 27, 2024)
218e002  added submodules to _dataset (ECEEvanCampbell, Sep 27, 2024)
f3cd6b0  Updated myodisco (eeddy, Sep 30, 2024)
3c41d00  Updates (eeddy, Sep 30, 2024)
a3d4b2e  added h5py (ECEEvanCampbell, Oct 1, 2024)
1640821  HyserNDOF and HyserRandom Classes (cbmorrell, Oct 1, 2024)
66704a9  Merge branch 'dataset_updates' of https://github.com/LibEMG/libemg in… (cbmorrell, Oct 1, 2024)
050598b  Add type hint to RegexFilter (cbmorrell, Oct 2, 2024)
3242bb5  Handle single values from MetadataFetcher (cbmorrell, Oct 2, 2024)
70d8019  Hyser PR Dataset (cbmorrell, Oct 2, 2024)
ffb9564  Remove subject 10 from random task dataset (cbmorrell, Oct 2, 2024)
a4b2d3c  Rename Hyser to _Hyser (cbmorrell, Oct 2, 2024)
dbbd5fe  Hyser documentation (cbmorrell, Oct 2, 2024)
aa032c9  Don't do any processing on the dataset (eeddy, Oct 7, 2024)
de89e6d  Add NinaproDB8 (cbmorrell, Oct 11, 2024)
898e59c  Merge branch 'dataset_updates' of https://github.com/LibEMG/libemg in… (cbmorrell, Oct 11, 2024)
cce14fc  Add note to NinaproDB8 (cbmorrell, Oct 11, 2024)
0e9cf18  Add OneSubjectEMaGerDataset import to datasets.py (cbmorrell, Oct 11, 2024)
85b8a3a  Add OneSubjectEMaGerDataset to dataset list (cbmorrell, Oct 11, 2024)
58bc45c  Fix parse_windows for 2D metadata (cbmorrell, Oct 11, 2024)
f7e6178  Reimplement NinaPro cyberglove data (cbmorrell, Oct 11, 2024)
86e0aac  Allow empty strings in RegexFilter (cbmorrell, Oct 17, 2024)
20c585b  Properly handle cyberglove data (cbmorrell, Oct 17, 2024)
afed8a7  Merge branch 'dataset_updates' of https://github.com/LibEMG/libemg in… (ECEEvanCampbell, Oct 17, 2024)
acbbf90  added tmr data (ECEEvanCampbell, Oct 18, 2024)
f8dc05b  Updates (eeddy, Oct 21, 2024)
d0c1fe7  Updated logging (eeddy, Oct 22, 2024)
64cea6e  Updates (eeddy, Oct 22, 2024)
d31aa87  Updates (eeddy, Oct 22, 2024)
c1d2f06  Add UserComplianceDataset (cbmorrell, Oct 22, 2024)
16ead39  Updates (eeddy, Oct 25, 2024)
3c50c7b  Updates (eeddy, Oct 25, 2024)
b9edf73  Convert labels field to classes in HyserPR (cbmorrell, Oct 28, 2024)
a0cce07  added CIIL_WS. Fixed dataset exist check for regression & WS (ECEEvanCampbell, Oct 28, 2024)
137c5b2  initial commit for CIIL_WS (ECEEvanCampbell, Oct 28, 2024)
d9def07  added onedrive download method (ECEEvanCampbell, Oct 28, 2024)
ecd68cf  added onedrive download method (ECEEvanCampbell, Oct 28, 2024)
b805527  added one drive downloader (ECEEvanCampbell, Oct 28, 2024)
12a0b53  added arguments for unzip and clean (ECEEvanCampbell, Oct 28, 2024)
4c322de  now downloads (ECEEvanCampbell, Oct 28, 2024)
5105af3  Fixed one site bio (eeddy, Oct 31, 2024)
d6afd04  Hyser labels fix (cbmorrell, Oct 31, 2024)
bae186c  Add subjects to Hyser classes (cbmorrell, Oct 31, 2024)
62a85da  Continuous transitions debugging (cbmorrell, Oct 31, 2024)
36481e7  Add subjects to OneSubjectEMaGerDataset (cbmorrell, Oct 31, 2024)
e6f7a08  Evaluate method fixes (cbmorrell, Oct 31, 2024)
9fd0784  Fixed continuous (eeddy, Oct 31, 2024)
64335df  Fix subject indexing with Hyser (cbmorrell, Oct 31, 2024)
279ffca  Handle default subject values for Hyser datasets (cbmorrell, Oct 31, 2024)
61ef9c0  Fixed DB8 (eeddy, Oct 31, 2024)
58896ce  Hyser missing subject fixes (cbmorrell, Oct 31, 2024)
ebf73e4  added onesiteBP (ECEEvanCampbell, Oct 31, 2024)
c8ab655  Fixed continuous transitions (eeddy, Oct 31, 2024)
fbf0581  Store metadata as values (cbmorrell, Oct 31, 2024)
79d594c  Add return_value parameter to RegexFilter (cbmorrell, Oct 31, 2024)
bba9694  Try to cast to number when grabbing metadata (cbmorrell, Oct 31, 2024)
2307e12  Replace list comprehension with mask operation (cbmorrell, Oct 31, 2024)
d891f28  Handle single element arrays (cbmorrell, Oct 31, 2024)
500aab3  Fixed Hyser workarounds (cbmorrell, Oct 31, 2024)

18 changes: 17 additions & 1 deletion .gitignore
@@ -45,4 +45,20 @@ test_*.py
*.csv
.vscode/*
test_delsys_api.py
resources/
resources/
*.csv
*.txt
ContinuousTransitions/*
FORS-EMG/*
MyoDisCo/*
NinaProDB1/*
*.zip
libemg/_datasets/__pycache__/*
CIILData/*
EMGEPN612.pkl
OneSubjectMyoDataset/
_3DCDataset/
ContractionIntensity/
CIILData/
*.pkl
LimbPosition/
4 changes: 4 additions & 0 deletions dataset_tryout.py
@@ -0,0 +1,4 @@
from libemg.datasets import *

accs = evaluate('LDA', 300, 100, feature_list=['MAV','SSC','ZC','WL'], included_datasets=['FougnerLP'], save_dir='')
print('\n' + str(accs))
45 changes: 45 additions & 0 deletions libemg/_datasets/_3DC.py
@@ -0,0 +1,45 @@
from libemg._datasets.dataset import Dataset
from libemg.data_handler import OfflineDataHandler, RegexFilter

class _3DCDataset(Dataset):
def __init__(self, dataset_folder="_3DCDataset/"):
Dataset.__init__(self,
1000,
10,
'3DC Armband (Prototype)',
22,
{0: "Neutral", 1: "Radial Deviation", 2: "Wrist Flexion", 3: "Ulnar Deviation", 4: "Wrist Extension", 5: "Supination", 6: "Pronation", 7: "Power Grip", 8: "Open Hand", 9: "Chuck Grip", 10: "Pinch Grip"},
'8 (4 Train, 4 Test)',
"The 3DC dataset including 11 classes.",
"https://doi.org/10.3389/fbioe.2020.00158")
self.url = "https://github.com/libemg/3DCDataset"
self.dataset_folder = dataset_folder

def prepare_data(self, split = False, subjects_values = None, sets_values = None, reps_values = None,
classes_values = None):
if subjects_values is None:
subjects_values = [str(i) for i in range(1,23)]
if sets_values is None:
sets_values = ["train", "test"]
if reps_values is None:
reps_values = ["0","1","2","3"]
if classes_values is None:
classes_values = [str(i) for i in range(11)]

print('\nPlease cite: ' + self.citation+'\n')
if (not self.check_exists(self.dataset_folder)):
self.download(self.url, self.dataset_folder)

regex_filters = [
RegexFilter(left_bound = "/", right_bound="/EMG", values = sets_values, description='sets'),
RegexFilter(left_bound = "_", right_bound=".txt", values = classes_values, description='classes'),
RegexFilter(left_bound = "EMG_gesture_", right_bound="_", values = reps_values, description='reps'),
RegexFilter(left_bound="Participant", right_bound="/",values=subjects_values, description='subjects')
]
odh = OfflineDataHandler()
odh.get_data(folder_location=self.dataset_folder, regex_filters=regex_filters, delimiter=",")
data = odh
if split:
data = {'All': odh, 'Train': odh.isolate_data("sets", [0], fast=True), 'Test': odh.isolate_data("sets", [1], fast=True)}

return data
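
For context (not part of the diff), a minimal usage sketch of the new prepare_data(split=...) interface might look like the following. It assumes the dataset can be cloned into _3DCDataset/ and that OfflineDataHandler.parse_windows keeps its existing (window_size, window_increment) behaviour; the window parameters are illustrative.

# Illustrative sketch only; assumes the dataset downloads to _3DCDataset/ and that
# parse_windows(window_size, window_increment) behaves as in current libemg.
from libemg._datasets._3DC import _3DCDataset

dataset = _3DCDataset()
data = dataset.prepare_data(split=True)   # {'All': ..., 'Train': ..., 'Test': ...}
train_odh, test_odh = data['Train'], data['Test']

# 200-sample windows with a 100-sample increment (0.2 s / 0.1 s at 1000 Hz).
train_windows, train_metadata = train_odh.parse_windows(200, 100)
print(train_windows.shape, train_metadata['classes'].shape)
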
17 changes: 17 additions & 0 deletions libemg/_datasets/__init__.py
@@ -0,0 +1,17 @@
from libemg._datasets import _3DC
from libemg._datasets import ciil
from libemg._datasets import continous_transitions
from libemg._datasets import dataset
from libemg._datasets import emg_epn612
from libemg._datasets import fors_emg
from libemg._datasets import fougner_lp
from libemg._datasets import grab_myo
from libemg._datasets import hyser
from libemg._datasets import intensity
from libemg._datasets import kaufmann_md
from libemg._datasets import myodisco
from libemg._datasets import nina_pro
from libemg._datasets import one_subject_emager
from libemg._datasets import one_subject_myo
from libemg._datasets import radmand_lp
from libemg._datasets import tmr_shirleyryanabilitylab
148 changes: 148 additions & 0 deletions libemg/_datasets/ciil.py
@@ -0,0 +1,148 @@
from libemg._datasets.dataset import Dataset
from libemg.data_handler import OfflineDataHandler, RegexFilter, FilePackager
from pathlib import Path



class CIIL_MinimalData(Dataset):
def __init__(self, dataset_folder='CIILData/'):
Dataset.__init__(self,
200,
8,
'Myo Armband',
11,
{0: 'Close', 1: 'Open', 2: 'Rest', 3: 'Flexion', 4: 'Extension'},
'1 Train (1s), 15 Test',
"The goal of this Myo dataset is to explore how well models perform when they have a limited amount of training data (1s per class).",
'https://ieeexplore.ieee.org/abstract/document/10394393')
self.url = "https://github.com/LibEMG/CIILData"
self.dataset_folder = dataset_folder

def prepare_data(self, split = False):
print('\nPlease cite: ' + self.citation+'\n')
if (not self.check_exists(self.dataset_folder)):
self.download(self.url, self.dataset_folder)

subfolder = 'MinimalTrainingData'
subjects = [str(i) for i in range(0, 11)]
classes_values = [str(i) for i in range(0,5)]
reps_values = ["0","1","2"]
sets = ["train", "test"]
regex_filters = [
RegexFilter(left_bound = "/", right_bound="/", values = sets, description='sets'),
RegexFilter(left_bound = "/subject", right_bound="/", values = subjects, description='subjects'),
RegexFilter(left_bound = "R_", right_bound="_", values = reps_values, description='reps'),
RegexFilter(left_bound = "C_", right_bound=".csv", values = classes_values, description='classes')
]
odh = OfflineDataHandler()
odh.get_data(folder_location=self.dataset_folder + '/' + subfolder, regex_filters=regex_filters, delimiter=",")

data = odh
if split:
data = {'All': odh, 'Train': odh.isolate_data("sets", [0], fast=True), 'Test': odh.isolate_data("sets", [1], fast=True)}

return data

class CIIL_ElectrodeShift(Dataset):
def __init__(self, dataset_folder='CIILData/'):
Dataset.__init__(self,
200,
8,
'Myo Armband',
21,
{0: 'Close', 1: 'Open', 2: 'Rest', 3: 'Flexion', 4: 'Extension'},
'5 Train (Before Shift), 8 Test (After Shift)',
"An electrode shift confounding factors dataset.",
'https://link.springer.com/article/10.1186/s12984-024-01355-4')
self.url = "https://github.com/LibEMG/CIILData"
self.dataset_folder = dataset_folder

def prepare_data(self, split = False):
print('\nPlease cite: ' + self.citation+'\n')
if (not self.check_exists(self.dataset_folder)):
self.download(self.url, self.dataset_folder)

subfolder = 'ElectrodeShift'
subjects = [str(i) for i in range(0, 21)]
classes_values = [str(i) for i in range(0,5)]
reps_values = ["0","1","2","3","4"]
sets = ["training", "trial_1", "trial_2", "trial_3", "trial_4"]
regex_filters = [
RegexFilter(left_bound = "/", right_bound="/", values = sets, description='sets'),
RegexFilter(left_bound = "/subject", right_bound="/", values = subjects, description='subjects'),
RegexFilter(left_bound = "R_", right_bound="_", values = reps_values, description='reps'),
RegexFilter(left_bound = "C_", right_bound=".csv", values = classes_values, description='classes')
]
odh = OfflineDataHandler()
odh.get_data(folder_location=self.dataset_folder + '/' + subfolder, regex_filters=regex_filters, delimiter=",")

data = odh
if split:
data = {'All': odh, 'Train': odh.isolate_data("sets", [0], fast=True), 'Test': odh.isolate_data("sets", [1,2,3,4], fast=True)}

return data


class CIIL_WeaklySupervised(Dataset):
def __init__(self, dataset_folder='CIIL_WeaklySupervised/'):
Dataset.__init__(self,
1000,
8,
'OyMotion gForcePro+ EMG Armband',
16,
{0: 'Close', 1: 'Open', 2: 'Rest', 3: 'Flexion', 4: 'Extension'},
'30 min weakly supervised, 1 rep calibration, 14 reps test',
"A weakly supervised environment with sparse supervised calibration.",
'In Submission')
self.url = "https://unbcloud-my.sharepoint.com/:u:/g/personal/ecampbe2_unb_ca/EaABHYybhfJNslTVcvwPPwgB9WwqlTLCStui30maqY53kw?e=MbboMd"
self.dataset_folder = dataset_folder

def prepare_data(self, split = False):
print('\nPlease cite: ' + self.citation+'\n')
if (not self.check_exists(self.dataset_folder)):
self.download_via_onedrive(self.url, self.dataset_folder)

# supervised odh loading
subjects = [str(i) for i in range(0, 16)]
classes_values = [str(i) for i in range(0,5)]
reps_values = [str(i) for i in range(0,15)]
setting_values = [".csv", ""] # this is arbitrary to get a field that separates WS from S
regex_filters = [
RegexFilter(left_bound = "", right_bound="", values = setting_values, description='settings'),
RegexFilter(left_bound = "/S", right_bound="/", values = subjects, description='subjects'),
RegexFilter(left_bound = "R", right_bound=".csv", values = reps_values, description='reps'),
RegexFilter(left_bound = "C", right_bound="_R", values = classes_values, description='classes')
]
odh_s = OfflineDataHandler()
odh_s.get_data(folder_location=self.dataset_folder+"CIIL_WeaklySupervised/",
regex_filters=regex_filters,
delimiter=",")

# weakly supervised odh loading
subjects = [str(i) for i in range(0, 16)]
reps_values = [str(i) for i in range(3)]
setting_values = ["", ".csv"] # this is arbitrary to get a field that separates WS from S
regex_filters = [
RegexFilter(left_bound = "", right_bound="", values = setting_values, description='settings'),
RegexFilter(left_bound = "/S", right_bound="/", values = subjects, description='subjects'),
RegexFilter(left_bound = "WS", right_bound=".csv", values = reps_values, description='reps'),
]
metadata_fetchers = [
FilePackager(regex_filter=RegexFilter(left_bound="", right_bound="targets.csv", values=["_"], description="classes"),
package_function=lambda x, y: (x.split("WS")[1][0] == y.split("WS")[1][0]) and (Path(x).parent == Path(y).parent)
)
]
odh_ws = OfflineDataHandler()
odh_ws.get_data(folder_location=self.dataset_folder+"CIIL_WeaklySupervised/",
regex_filters=regex_filters,
metadata_fetchers=metadata_fetchers,
delimiter=",")

data = odh_s + odh_ws
if split:
data = {'All': data,
'Pretrain': odh_ws,
'Train': odh_s.isolate_data("reps", [0], fast=True),
'Test': odh_s.isolate_data("reps", list(range(1,15)), fast=True)}

return data
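
As a point of reference (again not part of the diff), the split dictionary returned by these classes might be consumed as sketched below. The 'Train'/'Test' keys mirror the CIIL_ElectrodeShift code above; the windowing call and parameters are assumptions based on existing libemg usage.

# Illustrative sketch; keys come from CIIL_ElectrodeShift.prepare_data above.
from libemg._datasets.ciil import CIIL_ElectrodeShift

data = CIIL_ElectrodeShift().prepare_data(split=True)
before_shift = data['Train']   # 'training' session, before the electrode shift
after_shift  = data['Test']    # 'trial_1' through 'trial_4', after the shift

# 40-sample windows with a 5-sample increment (200 ms / 25 ms at the Myo's 200 Hz).
windows, metadata = before_shift.parse_windows(40, 5)
print(windows.shape)
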
63 changes: 63 additions & 0 deletions libemg/_datasets/continous_transitions.py
@@ -0,0 +1,63 @@
from libemg._datasets.dataset import Dataset
from libemg.data_handler import OfflineDataHandler
import h5py
import numpy as np

class ContinuousTransitions(Dataset):
def __init__(self, dataset_folder="ContinuousTransitions/"):
Dataset.__init__(self,
2000,
6,
'Delsys',
43,
{0: 'No Motion', 1: 'Wrist Flexion', 2: 'Wrist Extension', 3: 'Wrist Pronation', 4: 'Wrist Supination', 5: 'Hand Close', 6: 'Hand Open'},
'6 Training (Ramp), 42 Transitions (All combinations of Transitions) x 6 Reps',
"The testing set in this dataset has continuous transitions between classes which is a more realistic offline evaluation standard for myoelectric control.",
"https://ieeexplore.ieee.org/document/10254242")
self.dataset_folder = dataset_folder

def prepare_data(self, split = False):
print('\nPlease cite: ' + self.citation+'\n')
if (not self.check_exists(self.dataset_folder)):
print("Please download the dataset from: ") #TODO: Update
return

# Training ODH
odh_tr = OfflineDataHandler()
odh_tr.subjects = []
odh_tr.classes = []
odh_tr.extra_attributes = ['subjects', 'classes']

# Testing ODH
odh_te = OfflineDataHandler()
odh_te.subjects = []
odh_te.classes = []
odh_te.extra_attributes = ['subjects', 'classes']

for s_i, s in enumerate([2,3,4,5,6,7,8,9,10,11,12,13,14,15,17,18,19,20,21,22,23,25,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47]):
data = h5py.File('ContinuousTransitions/P' + f"{s:02}" + '.hdf5', "r")
cont_labels = data['continuous']['emg']['prompt'][()]
cont_labels = np.hstack([np.ones((1000)) * cont_labels[0], cont_labels[0:len(cont_labels)-1000]]) # Rolling about 0.5s as per Shri's suggestion
cont_emg = data['continuous']['emg']['signal'][()]
cont_chg_idxs = np.insert(np.where(cont_labels[:-1] != cont_labels[1:])[0], 0, -1)
cont_chg_idxs = np.insert(cont_chg_idxs, len(cont_chg_idxs), len(cont_emg))
for i in range(0, len(cont_chg_idxs)-1):
odh_te.data.append(cont_emg[cont_chg_idxs[i]+1:cont_chg_idxs[i+1]])
odh_te.classes.append(np.expand_dims(cont_labels[cont_chg_idxs[i]+1:cont_chg_idxs[i+1]]-1, axis=1))
odh_te.subjects.append(np.ones((len(odh_te.data[-1]), 1)) * s_i)

ramp_emg = data['ramp']['emg']['signal'][()]
ramp_labels = data['ramp']['emg']['prompt'][()]
r_chg_idxs = np.insert(np.where(ramp_labels[:-1] != ramp_labels[1:])[0], 0, -1)
r_chg_idxs = np.insert(r_chg_idxs, len(r_chg_idxs), len(ramp_emg))
for i in range(0, len(r_chg_idxs)-1):
odh_tr.data.append(ramp_emg[r_chg_idxs[i]+1:r_chg_idxs[i+1]])
odh_tr.classes.append(np.expand_dims(ramp_labels[r_chg_idxs[i]+1:r_chg_idxs[i+1]]-1, axis=1))
odh_tr.subjects.append(np.ones((len(odh_tr.data[-1]), 1)) * s_i)

odh_all = odh_tr + odh_te
data = odh_all
if split:
data = {'All': odh_all, 'Train': odh_tr, 'Test': odh_te}

return data
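
Because ContinuousTransitions populates the OfflineDataHandler fields (data, classes, subjects) by hand rather than through RegexFilter, a short downstream sketch may help. It assumes the HDF5 files are already in ContinuousTransitions/ and that isolate_data works on these manually added fields the same way it does for the regex-parsed datasets above.

# Illustrative sketch; the dataset must already be downloaded to ContinuousTransitions/.
from libemg._datasets.continous_transitions import ContinuousTransitions

data = ContinuousTransitions().prepare_data(split=True)
train_odh, test_odh = data['Train'], data['Test']   # ramp reps vs. continuous transitions

# Isolate one subject from the test handler, then window it (0.2 s / 0.1 s at 2000 Hz).
subject0 = test_odh.isolate_data('subjects', [0], fast=True)
windows, metadata = subject0.parse_windows(400, 200)
print(windows.shape)
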
55 changes: 55 additions & 0 deletions libemg/_datasets/dataset.py
@@ -0,0 +1,55 @@
import os
from libemg.data_handler import OfflineDataHandler
from onedrivedownloader import download as onedrive_download
# this assumes you have git downloaded (not pygit, but the command line program git)

class Dataset:
def __init__(self, sampling, num_channels, recording_device, num_subjects, gestures, num_reps, description, citation):
# Every class should have this
self.sampling=sampling
self.num_channels=num_channels
self.recording_device=recording_device
self.num_subjects=num_subjects
self.gestures=gestures
self.num_reps=num_reps
self.description=description
self.citation=citation

def download(self, url, dataset_name):
clone_command = "git clone " + url + " " + dataset_name
os.system(clone_command)

def download_via_onedrive(self, url, dataset_name, unzip=True, clean=True):
onedrive_download(url=url,
filename = dataset_name,
unzip=unzip,
clean=clean)

def remove_dataset(self, dataset_folder):
remove_command = "rm -rf " + dataset_folder
os.system(remove_command)

def check_exists(self, dataset_folder):
return os.path.exists(dataset_folder)

def prepare_data(self, split = False):
pass

def get_info(self):
print(str(self.description) + '\n' + 'Sampling Rate: ' + str(self.sampling) + '\nNumber of Channels: ' + str(self.num_channels) +
'\nDevice: ' + self.recording_device + '\nGestures: ' + str(self.gestures) + '\nNumber of Reps: ' + str(self.num_reps) + '\nNumber of Subjects: ' + str(self.num_subjects) +
'\nCitation: ' + str(self.citation))

# given a directory, return a list of files in that directory matching a format
# can be nested
# this is just a handy utility
def find_all_files_of_type_recursively(dir, terminator):
files = os.listdir(dir)
file_list = []
for file in files:
if file.endswith(terminator):
file_list.append(dir+file)
else:
if os.path.isdir(dir+file):
file_list += find_all_files_of_type_recursively(dir+file+'/',terminator)
return file_list
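
A brief note on the helper at the bottom of this file: because it concatenates path strings directly rather than using os.path.join, the directory argument is expected to end with a separator. A hypothetical call:

# Hypothetical example; the trailing slash matters because the helper
# builds paths as dir + file rather than joining them.
from libemg._datasets.dataset import find_all_files_of_type_recursively

csv_files = find_all_files_of_type_recursively('CIILData/', '.csv')
print(len(csv_files), 'CSV files found')
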