
Commit

changed landmarks to environment variable, updated Dockerfile
Linardos committed Nov 12, 2021
1 parent f3c6cbf commit cf8da0e
Showing 5 changed files with 10 additions and 7 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -56,7 +56,7 @@ docker run -it -v $DATA_PATH:/BFP/dataset -v $PWD/src:/BFP/src -e csv_path=/BFP/

BCDR:
```bash
-DATA_PATH=/home/lidia/Datasets/BCDR/cropped/
+DATA_PATH=/home/lidia-garrucho/datasets/BCDR/cropped/
CSV_FILENAME=None
DATA_LOADER_TYPE=bcdr

5 changes: 3 additions & 2 deletions docker/Dockerfile
@@ -20,12 +20,13 @@ RUN source fl_env/bin/activate
RUN pwd

COPY . /BFP/docker
-RUN pip3 install -r requirements.txt
-RUN pip3 install --timeout 1000 torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio===0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
+RUN pip install -r requirements.txt
+RUN pip3 install torch==1.10.0+cu113 torchvision==0.11.1+cu113 torchaudio===0.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html
RUN apt-get install ffmpeg libsm6 libxext6 -y

RUN mkdir /BFP/dataset
ENV PYTHONPATH "${PYTHONPATH}:/BFP"
+ENV landmarks "/BFP/src/preprocessing/optimam_train_hologic_landmarks.pth"

WORKDIR "/BFP/src"
#RUN python3 client.py
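As a quick check that the two ENV lines above behave as intended inside the container, here is a short illustrative Python snippet (not part of this commit; run inside the built image):

```python
import os

# Both names are taken from the ENV lines in the Dockerfile above.
print(os.environ.get("PYTHONPATH"))  # should include /BFP
print(os.environ.get("landmarks"))   # /BFP/src/preprocessing/optimam_train_hologic_landmarks.pth
```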
3 changes: 2 additions & 1 deletion src/client.py
@@ -39,6 +39,7 @@ def import_class(name):
# SERVER= os.getenv('server',"[::]:8080")
SERVER= os.getenv('server',"161.116.4.137:8080")
DATA_LOADER_TYPE= os.getenv('data_loader_type',"optimam") #env variable data_loader if not given default to optimam type dataloading

# Docker ip is: 172.17.0.3
print(f'Here dataset path {DATASET_PATH}')
print(f'Here csv path {CSV_PATH}')
@@ -50,7 +51,7 @@ def import_class(name):

# args = parser.parse_args()

DEVICE = torch.device("cuda") #if torch.cuda.is_available() else "cpu")
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
CRITERION = import_class(CONFIG['hyperparameters']['criterion'])

def load_data():
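The hunk above calls `import_class` to turn the `criterion` string from config.yaml into a class object, but its body is not shown in this diff. A plausible sketch, assuming the config stores dotted paths such as `torch.nn.BCELoss`:

```python
import importlib

def import_class(name):
    # e.g. "torch.nn.BCELoss" -> the BCELoss class object
    module_path, class_name = name.rsplit(".", 1)
    module = importlib.import_module(module_path)
    return getattr(module, class_name)
```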
3 changes: 1 addition & 2 deletions src/config.yaml
@@ -6,7 +6,7 @@ name: 'sanity_check' # use 4digit name tag to overwrite experiment with same par
hyperparameters:
num_epochs : 20
iteration_number: 8 # this will define the number of batches when doing federated (we want the same number of iterations)
-batch_size : 4 # if 0, it will adapt to center
+batch_size : 32 # if 0, it will adapt to center
test_batch_size : 10 #use a higher one for speed up if CUDA memory allows
lr : 0.01
early_stop_counter: 20 # number of iterations after performance stagnation before stopping
@@ -23,7 +23,6 @@ model:
in_ch: 3
out_ch: 1
linear_ch: 512
-# out_ch: 1 #
early_layers_learning_rate: 1e-5 #10^-5, if set to 0 early layers will be frozen
seed: 42 # Particularly important for federated. Models will make no sense if we aggregate from different initializations
pretrained: # if we have our own weights
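For context, a minimal sketch of how these values might be consumed on the Python side, assuming the project loads config.yaml with PyYAML (the loading code is not part of this diff):

```python
import yaml

with open("config.yaml") as f:
    CONFIG = yaml.safe_load(f)

batch_size = CONFIG["hyperparameters"]["batch_size"]     # 32 after this commit
lr = CONFIG["hyperparameters"]["lr"]                      # 0.01
criterion_name = CONFIG["hyperparameters"]["criterion"]   # passed to import_class in client.py
```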
4 changes: 3 additions & 1 deletion src/data_loader.py
@@ -47,6 +47,8 @@

image_ctr = 0

+LANDMARKS = os.environ['landmarks']# CONFIG['paths']['landmarks']

# Handle DICOMs:
# "/home/kaisar/Datasets/InBreast/AllDICOMs"

@@ -65,7 +67,7 @@ def preprocess_one_image_OPTIMAM(image): # Read as nifti without saving
image = torch.from_numpy(rescaled_img).permute(2,0,1)

# Histogram Matching
-landmarks_values = torch.load(HOME_PATH / CONFIG['paths']['landmarks'])
+landmarks_values = torch.load(HOME_PATH / LANDMARKS)
apply_hist_stand_landmarks(image, landmarks_values)

paddedimg = torch.zeros(3,224,224)
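The new `LANDMARKS = os.environ['landmarks']` lookup raises a KeyError when the variable is unset (e.g. outside the container). A more defensive variant, not what this commit does, could fall back to the old config.yaml entry:

```python
import os
from pathlib import Path

def resolve_landmarks_path(config: dict, home_path: Path) -> Path:
    """Prefer the container's 'landmarks' environment variable (set in the
    Dockerfile); otherwise fall back to the config path used before this commit."""
    landmarks = os.environ.get("landmarks", config["paths"]["landmarks"])
    return home_path / landmarks

# Hypothetical usage, mirroring the hunk above:
# landmarks_values = torch.load(resolve_landmarks_path(CONFIG, HOME_PATH))
# apply_hist_stand_landmarks(image, landmarks_values)
```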
