Replies: 3 comments 1 reply
-
Hi @ggiavelli , thank you for your interest in TornadoVM.
I am not sure I understood your question.
The calls streamIn and streamOut will force a copy (from the host to the device and vice versa) every time the execute method is invoked. For instance:

ts.streamIn(a, b)
.task("task1", ... )
.task("task2", ...)
.task("taskN", ...)
.streamOut(result1, result2, resultN);
ts.execute();
ts.execute();
...

However, there are strategies in TornadoVM to keep the updated data on the device without sending it back to the host. This strategy assumes the data are not manipulated by the host before executing a task-schedule again. To do so, you would do something like this:

ts.task("task1", ... )
.task("task2", ...)
.task("taskN", ...)
ts.execute();
ts.execute();
// Force copy out of the variables of interest
ts.syncObjects(result1, result2, ..., resultN);

Note that syncObjects is a blocking call, and it will update the result in the variables passed as arguments. Those variables must already be present when invoking some of the tasks.
Here you can find an example:
https://github.com/beehive-lab/TornadoVM/blob/d0f450642603bed9721452acd824f580fd582dac/benchmarks/src/main/java/uk/ac/manchester/tornado/benchmarks/nbody/NBodyTornado.java#L119
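For completeness, here is a minimal end-to-end sketch of the two patterns above, assuming the TaskSchedule API discussed in this thread (the import paths follow the tornado-api layout linked below; the kernel, class and buffer names are only illustrative, not taken from any TornadoVM example):

import uk.ac.manchester.tornado.api.TaskSchedule;
import uk.ac.manchester.tornado.api.annotations.Parallel;

public class VectorAddExample {

    // Kernel: only touches supported types (here, primitive float arrays).
    public static void add(float[] a, float[] b, float[] c) {
        for (@Parallel int i = 0; i < c.length; i++) {
            c[i] = a[i] + b[i];
        }
    }

    public static void main(String[] args) {
        final int n = 1024;
        float[] a = new float[n];
        float[] b = new float[n];
        float[] c = new float[n];

        // Pattern 1: streamIn/streamOut copy the data on every execute().
        TaskSchedule ts = new TaskSchedule("s0")
                .streamIn(a, b)
                .task("t0", VectorAddExample::add, a, b, c)
                .streamOut(c);
        ts.execute();   // copies a, b in and c out
        ts.execute();   // copies again

        // Pattern 2: keep the data resident on the device across executions
        // and only copy out when the host actually needs the result.
        TaskSchedule ts2 = new TaskSchedule("s1")
                .task("t1", VectorAddExample::add, a, b, c);
        ts2.execute();
        ts2.execute();
        ts2.syncObjects(c);   // blocking copy of c back to the host
    }
}

In the second pattern, a, b and c are sent to the device on the first execute() and stay there; only the syncObjects call brings c back.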
For your question:

"but I am working on a connectionist architecture where I want to pass in an array of class that has variables internal to the class, have it compute, and then remain updated"

TornadoVM supports a subset of Java. For object support, TornadoVM can run the supported types (primitive arrays, vector types and matrix types). Having said this, the TornadoVM JIT compiler will also inline all objects passed in, in order to analyze the primitive or supported object types they contain. Nevertheless, we do not recommend this approach, since it is still very experimental. For this reason, we recommend using arrays of primitives or the supported types. All supported types are here:
https://github.com/beehive-lab/TornadoVM/tree/master/tornado-api/src/main/java/uk/ac/manchester/tornado/api/collections/types
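In practice, the "arrays of primitives" recommendation usually means flattening each object field into its own primitive array (a structure-of-arrays layout). A tiny hypothetical sketch, with made-up class and field names:

import uk.ac.manchester.tornado.api.annotations.Parallel;

// Instead of passing Cell[] cells (class Cell { float state; float bias; }),
// keep one primitive array per field, which TornadoVM handles directly.
public class FlattenedCells {
    public static void update(float[] state, float[] bias) {
        for (@Parallel int i = 0; i < state.length; i++) {
            state[i] += bias[i];
        }
    }
}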
"I was also wondering how to have a global memory array which the passed in class would update"

What do you mean here? Could you expand on it?
-
*Question: Does it run under Windows 10 as well?*
Thanks. Really appreciate the quick reply.
This was useful to get started. I will have to experiment with fully serializable objects and see how that goes. Great job with the library; it does seem to be a step up from working with OpenCL directly.
In a connectionist architecture there is a neuron which has a state value, an activation function, a propagation function, and about 16 connections to other neuronal units. Each connection also has a weight and may have an activation function. So if you stick to a 2D representation (though most likely things would evaluate in 3D), the challenge from a parallel standpoint is that you have one neuronal unit and the goal is to determine its central value. To do this you must sum the connections to 16 other neuronal units. However, the entirety of all the neurons can be quite large, so you don't want to pass in that large a structure.
I believe the solution would be to maintain the array of neurons outside of the parallelism, with synchronized access, and then, before passing a neuron in for parallel compute, also fetch an array corresponding to its inputs.
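One possible way to express that with the flattened-array approach recommended above (a sketch only; the 16-connection fan-in, class and array names are all illustrative): each neuron's incoming connections are kept as an index array plus a weight array, so the kernel only ever touches primitive arrays.

import uk.ac.manchester.tornado.api.annotations.Parallel;

public class NeuronKernels {

    // One flat state array for all neurons, plus per-neuron connection data
    // flattened to (numNeurons * FAN_IN) entries.
    public static final int FAN_IN = 16;

    // newState[i] = sum over the 16 incoming connections of
    //               connection weight * state of the source neuron.
    public static void propagate(float[] state, float[] newState,
                                 int[] sourceIndex, float[] weight) {
        for (@Parallel int i = 0; i < newState.length; i++) {
            float sum = 0f;
            for (int c = 0; c < FAN_IN; c++) {
                int k = i * FAN_IN + c;
                sum += weight[k] * state[sourceIndex[k]];
            }
            newState[i] = sum;   // an activation function could be applied here
        }
    }
}

Here sourceIndex[i * FAN_IN + c] is the index of the c-th input neuron of neuron i, and the whole population lives in the flat state array rather than being passed around as objects.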
The last complexity is that you want the order of processing to be plane by plane. So if the neural structure is 8 across and 12 down, then you want all 8 to evaluate before evaluating the next row of 8 below. I believe that should simply be a matter of how you construct the loops, as iterations within one loop can run out of order. But the outer loop, which increments the Y position, might not be parallelizable, as you may still have execution tasks from a prior row running. Since those would not yet have their core values set, this could produce errors.
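One way that ordering could be enforced with the TaskSchedule API from the reply above: keep the outer Y loop on the host and launch one blocking execute() per row, so row y-1 is complete before row y starts, while the large arrays stay resident on the device. A rough sketch with a toy propagation rule and illustrative names (the one-element rowHolder array is streamed in so the row index is refreshed on every execution, as described earlier in the thread):

import uk.ac.manchester.tornado.api.TaskSchedule;
import uk.ac.manchester.tornado.api.annotations.Parallel;

public class PlaneByPlane {

    // Update one row: every neuron in row y (all x in parallel) reads the
    // already-updated row above it and writes its own state in place.
    public static void updateRow(float[] state, float[] weight,
                                 int[] rowHolder, int width) {
        int y = rowHolder[0];
        for (@Parallel int x = 0; x < width; x++) {
            int i = y * width + x;
            // Toy propagation: weighted value from the neuron directly above.
            state[i] = weight[i] * state[(y - 1) * width + x];
        }
    }

    public static void main(String[] args) {
        final int width = 8, height = 12;
        float[] state  = new float[width * height];
        float[] weight = new float[width * height];
        java.util.Arrays.fill(weight, 0.5f);
        java.util.Arrays.fill(state, 0, width, 1.0f);   // input row

        int[] rowHolder = new int[1];

        // streamIn(rowHolder): the tiny row index is re-copied on every
        // execute(); the large state/weight arrays stay on the device.
        TaskSchedule ts = new TaskSchedule("rows")
                .streamIn(rowHolder)
                .task("row", PlaneByPlane::updateRow, state, weight, rowHolder, width);

        // Host-side outer loop enforces the plane-by-plane ordering:
        // row y only starts after row y-1 has finished.
        for (int y = 1; y < height; y++) {
            rowHolder[0] = y;
            ts.execute();
        }
        ts.syncObjects(state);   // copy the final grid back to the host
    }
}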
Essentially what has happened is the dim low IQ savages at Google pushed a model of neural networks designed for image processing. These luddites have utterly corrupted EVERY LAST API and LIBRARY so that it's all matrix tensor functions. Worse, it's sheer gibberish to extend functions in C. The final horror is that these libraries are so restrictive, even with their so-called ability to customize, that they cannot break out of this dim savage low IQ luddite image processing model. Its derivation comes from those idiots Hinton and Sutskever at Google.
So after huge frustrations with THE ENTIRE FRIGGIN WORLD DOING IT WRONG, I am hopeful to do something that can create a proper neurally modeled universe of different neural types, different connection strategies and functions.
Our first product is morphological NLP enterprise search. We built the fastest, most precise search engine in the world. No one cared. Sigh.
The second product is our cognitive engine, which is a bit of a regular conversational language interface to information, but through neural activations and propagations rather than information retrieval techniques. We built a large conversational engine based on our advanced morphological engine and were not happy: it was not advanced enough to really be like how humans solve problems.
Our next product, where we ran into challenges with the frameworks, is NeuralAlert, a warning/insurance system for crypto and commodity stock trading, but with application to corporate data as well. We wanted to do time series functions such as acceleration calculations and to do advanced connection architectures. NOPE, nothing does it. At that point we began to look at a way to do it ourselves, but the OpenCL / CUDA / IBM 8 tech edition JVM options were all just not really written to make that easy. The reason Keras and TensorFlow have such limited customization is the low-level coding required to do so, so they build in many functions they think should cover most situations - if you are a dumb Google idiot.
So we arrived at Tornado and think that, although it's a fair bit of kit to make all this happen, it would be massively cool and a technical advantage.
I'll start in on it.
thanks
Gia
-
@ggiavelli , In case of interest, the public deliverable is here:
-
I notice that you streamOut a variable, but I am working on a connectionist architecture where I want to pass in an array of a class that has variables internal to the class, have it compute, and then remain updated. Does this require streaming OUT each class and then reading it back in as serialized, or some such strategy?
I was also wondering how to have a global memory array which the passed-in class would update.
thanks!