Post DUSC Instructor thoughts #24

Open · DimmestP opened this issue Nov 15, 2023 · 1 comment

@DimmestP
Thoughts after DUSC workshop 15/11/23:

  • Using a different dataset that has a more even spread (50:50) between the two classes would make the bias-variance tradeoff clearer
  • Also, using a non-medical dataset would broaden the lesson's usability
  • Generally needs more programming tasks. If this lesson follows on from the intro to ML course, then you can:
  1. Set the data pre-processing as a task
  2. Get learners to determine accuracy across test/training data from the get-go
  3. Set a general task at the end to allow learners to train on more data or on different data types
  • The course really could do with highlighting the benefits of random forests and gradient boosting. This can only be done by adding more features sooner.
  • Reduce the amount of plotting. Visualising decision trees is effective early on, but plotting becomes less effective and more time-consuming when comparing later models.
  • Perhaps ignore gradient boosting entirely. It is skimmed over so fast it doesn't convey any of the benefits or differences over random forests.
  • To show the power of random forests, try running the model on highly correlated features
  • Ideally the code should not keep renaming the `mdl` variable; creating a new variable for each model would help comparison (a sketch illustrating these last few points follows below)
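Concretely, a minimal sketch of the kind of thing I have in mind (synthetic placeholder data and names, not the lesson's actual code): one variable per model, accuracy reported on both training and test data, and a deliberately correlated feature pair.

```python
# Sketch only: synthetic data standing in for the lesson's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.1, size=n)  # highly correlated with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = (x1 + x3 + rng.normal(scale=0.5, size=n) > 0).astype(int)  # roughly 50:50 classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One variable per model (rather than reusing `mdl`) makes comparison easy.
tree_model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
forest_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("tree", tree_model), ("forest", forest_model)]:
    train_acc = accuracy_score(y_train, model.predict(X_train))
    test_acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: train={train_acc:.3f}, test={test_acc:.3f}")
```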
@tompollard (Collaborator)

This is great, thanks for the helpful feedback @DimmestP. I'll try to find some time to think about how the points can be addressed (and would welcome pull requests in the meantime!).

Some really quick thoughts:

> Also, using a non-medical dataset would broaden the lesson's usability

I have mixed feelings about switching to a non-medical dataset (though I admit that's partly because of my own bias towards health data!). Wouldn't any dataset we choose have some kind of topic? I dislike "toy" datasets like Iris, so I'd be happy to switch, but preferably to something interesting.

> Generally needs more programming tasks.

Agreed, definitely more work needed here. I intentionally tried to reduce time spent on data pre-processing because it is covered in an earlier workshop, but I agree that evaluation, tasks, etc would be good topics.

> The course really could do with highlighting the benefits of random forests and gradient boosting. This can only be done by adding more features sooner.

For me this is a tough one. I have found the visualization aspect of the workshop to be important, and it's not ideal that the ability to visualize models diminishes as the number of features increases.

Ideally I'd like it if we could (1) keep visualization and (2) work out how to incorporate more features when needed (e.g. to demonstrate improved performance).
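One option (just a sketch on a synthetic placeholder dataset, not a worked-out solution): fit on all of the features but only draw the top levels of the tree, via `plot_tree`'s `max_depth` argument, so the plots stay readable as features are added.

```python
# Sketch only: synthetic data and placeholder feature names.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, plot_tree

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
tree_model = DecisionTreeClassifier(random_state=0).fit(X, y)

plot_tree(
    tree_model,
    max_depth=2,  # draw only the top of the tree, however deep the fit is
    feature_names=[f"feature_{i}" for i in range(6)],
    class_names=["class 0", "class 1"],
    filled=True,
)
plt.show()
```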

> Perhaps ignore gradient boosting entirely. It is skimmed over so fast it doesn't convey any of the benefits or differences over random forests.

I agree the gradient boosting section needs work. I'd like to keep it if possible, and add more detail.

At this point in the workshop, I usually take people to PubMed and point out some of the papers that have been published on this dataset using XGBoost. Not because they are exciting papers, but because prior to the workshop I think many people would have believed those papers were doing something special.
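If we keep the section, something like this rough sketch (synthetic placeholder data, arbitrary hyperparameters) could give learners a direct comparison to run themselves:

```python
# Sketch only: a side-by-side comparison of a random forest and
# gradient boosting on a synthetic stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

forest_model = RandomForestClassifier(n_estimators=100, random_state=0)
boosted_model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                           random_state=0)

for name, model in [("random forest", forest_model),
                    ("gradient boosting", boosted_model)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```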

> Ideally the code should not keep renaming the `mdl` variable; creating a new variable for each model would help comparison

Definitely, there are a bunch of things like this that need cleaning up!
