
Add multiple cpu cores and multiple Julia computers support #852

Closed

Description

From a student's perspective: most students do not have an expensive NVIDIA GPU.
Some even run Linux and Julia on a Samsung Galaxy phone via DeX
(connect a monitor, keyboard, and mouse, and you have a PC).

TensorFlow used all 12 of my CPU cores when training a model.

Training and running models on the CPU makes neural network development economical and fast.
The very expensive LLM approach is not wise for every use case.
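Julia can already spread CPU-bound work across all local cores with its built-in threading. A minimal sketch, assuming Julia was started with multiple threads (the function name is illustrative, not part of any library):

```julia
# Start Julia with all cores available, e.g.:  julia --threads=auto
using Base.Threads

# Toy "compute step": square each element in parallel across CPU cores.
function parallel_square!(out, x)
    @threads for i in eachindex(x)
        out[i] = x[i]^2
    end
    return out
end

x = collect(1.0:8.0)
out = similar(x)
parallel_square!(out, x)
```

With `--threads=auto`, `Threads.nthreads()` reports one thread per available core, so a loop like this uses all 12 cores without any extra dependencies.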

Julia's parallelism makes it easy to create a cluster of Julia computers.

Training on 10 computers, where 2 of them have a supported GPU, would speed up the process.

For example, all lab hardware can train neural network models at night.

Most of the time, those computers sit unused.
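A cluster like this can be wired up with Julia's standard `Distributed` library. A minimal sketch, using local workers; the commented-out remote form assumes passwordless SSH and placeholder hostnames:

```julia
using Distributed

# Add local worker processes (one per core). For remote lab machines use e.g.
#   addprocs([("lab-pc-01", 4), ("lab-pc-02", 4)])
# which connects over SSH (hostnames here are hypothetical).
addprocs(4)

# Define the work on every worker process.
@everywhere heavy_step(x) = sum(sqrt(i) for i in 1:x)

# Farm the work out across all workers.
results = pmap(heavy_step, fill(10_000, 8))
```

`pmap` load-balances automatically, so machines with and without a GPU can sit in the same worker pool.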

This all goes together with genetic algorithms to search layer architectures and tune various configuration parameters, even dataset formatting and sizes.

Here, a GPU won't help.

A cluster of 10 Julia computers would speed up the genetic algorithm search significantly.
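The genetic-algorithm side parallelizes naturally, because each candidate's fitness is evaluated independently. A sketch of one generation distributed with `pmap`; the genome encoding and fitness function are purely illustrative stand-ins for "train this architecture and score it":

```julia
using Distributed
addprocs(4)

# Toy genome: a vector of layer widths. The fitness below is a stand-in
# for "train this architecture and return validation accuracy"; it simply
# rewards widths close to 16 (an arbitrary illustrative target).
@everywhere fitness(genome) = -sum(abs2, genome .- 16)

# Random initial population of 20 three-layer candidates.
population = [rand(1:64, 3) for _ in 1:20]

# Evaluate the whole generation in parallel across workers,
# then keep the best half as parents for the next generation.
scores  = pmap(fitness, population)
order   = sortperm(scores; rev=true)
parents = population[order[1:10]]
```

Because fitness evaluations dominate the runtime of such a search, throughput scales roughly with the number of workers in the pool.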

