[WIP]: Exploring removing `<Float>` everywhere. #465
Conversation
This change explores stuffing some extra typealiases in a protocol to obviate the need to specify `Foo<Float>` everywhere. Unfortunately, it appears that the typealiases don't shadow the imported names when they're defined in another module. Disadvantages of this approach: it might make it more confusing for someone who is unfamiliar with this pattern to understand what is specifying the scalar parameter. Additionally, in certain usages the typealiases did not work even within the same module, and I had to use `Self.$TYPEALIAS` to get the code to compile. Also, although I didn't play around with it much, the `Tensor` typealias caused a number of problems in the typechecker.
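For context, a minimal sketch of the pattern this PR experiments with, reconstructed from the diff quoted later in the thread (the exact member list here is an assumption, and whether the aliases resolve cleanly is precisely the question this thread digs into):

```swift
import TensorFlow

// Sugar-only protocol: conforming model types are meant to write `Conv2D`,
// `Dense`, and `TensorF` without repeating `<Float>` at every use site.
public protocol Foo {
    typealias Conv2D = TensorFlow.Conv2D<Float>
    typealias Dense = TensorFlow.Dense<Float>
    typealias TensorF = TensorFlow.Tensor<Float>
}

// Intended usage: the aliases come "for free" from the conformance.
public struct TinyModel: Layer, Foo {
    public var fc = Dense(inputSize: 4, outputSize: 2, activation: relu)

    @differentiable
    public func callAsFunction(_ input: TensorF) -> TensorF {
        fc(input)
    }
}
```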
        let convolved = input.sequenced(through: conv1, pool1, conv2, pool2)
        return convolved.sequenced(through: flatten, fc1, fc2, fc3)
    }
}

// Can't move this to another module apparently...
public protocol Foo {
I think it would be better to define public typealiases at the top-level (like `TensorFlowNumeric` and friends), if we are to create convenience typealiases for `Tensor` and various layers specialized to `Scalar = Float`.
I think that is better than creating a public protocol whose only purpose is to provide convenience typealiases (for use by `Layer`-conforming types).
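A sketch of that alternative, with illustrative names (not an agreed-upon API):

```swift
import TensorFlow

// Top-level convenience aliases, analogous in spirit to `TensorFlowNumeric`
// and friends. The names here are placeholders only.
public typealias TensorF = Tensor<Float>
public typealias Conv2DF = Conv2D<Float>
public typealias DenseF = Dense<Float>
```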
Naming is a challenge though: finding names that are shorter and idiomatic in Swift.
I'm not sure I like the "F" abbreviation in `TensorF` - I prefer the explicitness of `Tensor<Float>` and how it looks similar to `Tensor<Scalar>` generic code. I'm not sure there's precedent for an "F" abbreviation in the standard library: `typealias SIMD2F = SIMD2<Float>` doesn't exist, for example.
A whole bunch of libraries using Swift for TensorFlow seem to define something like `typealias TensorF = Tensor<Float>` though, so maybe there's sufficient demand for us to add it to tensorflow/swift-apis? I'd still prefer for user libraries to define the typealias locally.
I think it would be better to define public typealiases at the top-level (like TensorFlowNumeric and friends), if we are to create convenience typealiases for Tensor and various layers specialized to Scalar = Float.
The idea I'm really going for is `TypedModel` (better name wanted); you actually want to be generic across Scalar types, which is why you define an `associatedtype` for the Scalar type, and then use that to pre-parameterize all the layers' generic parameters by that associated scalar type. (Then, you can specialize that associated type to `Float` for the super easy case. This yields a nice progressive disclosure of complexity.) Said another way, you start by making a struct that conforms to `FloatModel`. You use all the TensorFlow layer types and can ignore all the generics. Example:
struct MyFirstModel: FloatModel {
    var conv = Conv2D(...) // this is typealias'd to `TensorFlow.Conv2D<Float>`
    var flatten = Flatten()
    var dense = Dense(...)
    func callAsFunction(_ input: Tensor) -> Tensor { dense(flatten(conv(input))) }
}
If you want to step it up a notch, you make your struct generic over a Scalar, and switch to conforming to `TypedModel`. Now you have a properly "generic" TensorFlow layer (for the way the current DType generic system works... which we might also want to revisit):
struct MyAdvancedModel<Scalar: TensorFlowScalar>: TypedModel {
    var conv = Conv2D(...) // this is typealias'd to `TensorFlow.Conv2D<Scalar>`
    var flatten = Flatten()
    var dense = Dense(...)
    func callAsFunction(_ input: Tensor) -> Tensor { dense(flatten(conv(input))) }
}
Finally, if you want to do more advanced things, then you remove all sugar, go directly to your own generic parameters and `Layer`, and you're free to do whatever you'd like.
Does that make sense? (That's why you can't make them public top-level: doing so breaks the progressive disclosure of complexity and the smooth extension into more and more powerful generic programming.)
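For reference, a sketch of the protocol skeleton this progression implies; it is essentially what the diff later in this thread prototypes, though the exact constraints here are an assumption:

```swift
import TensorFlow

// Step 2 of the progression: an associated Scalar pre-parameterizes the
// library types, so a conforming model never repeats `<Scalar>`.
public protocol TypedModel: Layer {
    associatedtype Scalar: TensorFlowScalar
    typealias Tensor = TensorFlow.Tensor<Scalar>
}

// Layer aliases only make sense for floating-point scalars.
public extension TypedModel where Scalar: TensorFlowFloatingPoint {
    typealias Conv2D = TensorFlow.Conv2D<Scalar>
    typealias Dense = TensorFlow.Dense<Scalar>
    typealias Flatten = TensorFlow.Flatten<Scalar>
}

// Step 1 of the progression: the "super easy" case, pinned to Float.
public protocol FloatModel: TypedModel where Scalar == Float {}
```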
I'm not sure I like the "F" abbreviation in TensorF

Yeah, I wanted `Tensor`, but that wouldn't compile for some reason. It also seemed like a compiler bug (or a mis-feature), but I didn't spend time chasing it down.
I'd still prefer for user libraries to define the typealias locally.
I think that we should provide some easier & cleaner way to get started. (Coming back to progressive disclosure of complexity.) I think that right now, all the DType-generics are a significant complexity burden.
The clarifications make sense, thanks! Extending protocols with associated types or typealiases is indeed a usability boost.
The name `FloatModel` feels pretty user-understandable.
The name `TypedModel` is more generic/useful but less understandable. I can't think of a better alternative though - `ScalarParameterizedModel` seems accurate and more understandable but is quite a bit longer. I think a long name is fine because users write it only once per conforming model.
I'd still prefer for user libraries to define the typealias locally.
I think that we should provide some easier & cleaner way to get started. (Coming back to progressive disclosure of complexity.) I think that right now, all the DType-generics are a significant complexity burden.
I agree that ways to avoid writing generic parameters (when they can be inferred, or when a default is sensible) would be nice!
Minor pedantry about "complexity burden", sorry: I feel that `<Scalar>` generics are a syntactic burden: it's tiring to write generic types like `Tensor<Scalar>` when a suitable `Scalar` type is available in the type context (e.g. within the body of `Dense<Scalar>`).
I don't feel that `<Scalar>` is really conceptually complex - we'd just like better syntactic sugar and defaulting.
        let convolved = input.sequenced(through: conv1, pool1, conv2, pool2)
        return convolved.sequenced(through: flatten, fc1, fc2, fc3)
    }
}

// Can't move this to another module apparently...
// Can't move this to another module apparently...

1. Could name aliasing be the issue? The typealiases' names might be shadowing the underlying types' names: `typealias Conv2D = TensorFlow.Conv2D<Float>`
2. Maybe the typealiases need to be public?
Thanks for the questions & comments:
re: 1. The idea is to shadow the type names to provide some sugar and avoid needing to specify `<Float>` everywhere. But this gets to a larger question (which is why I stopped in this incomplete state): shouldn't symbol resolution work the same whether the protocol is defined in the same module or a different module? To me, this sounds like a compiler bug, but I wanted to confirm with you, @dabrahams, and others whether this was intentional behavior. (An incomplete web search didn't yield any spec.)
re: 2. Can't use `public` in a protocol definition (only in extensions, which I did when experimenting). :-)
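To spell out the access-modifier point with a standalone sketch (hypothetical names):

```swift
import TensorFlow

public protocol Sugar {
    // Writing `public typealias ...` here is rejected: protocol members
    // cannot carry their own access modifiers and inherit the protocol's.
    typealias TensorF = Tensor<Float>
}

// An extension can carry an explicit access level; `public extension`
// makes its members public by default.
public extension Sugar {
    typealias TensorD = Tensor<Double>
}
```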
But this gets to a larger question (which is why I stopped in this incomplete state): shouldn't symbol resolution work whether the protocol is defined in the same module or a different module?
Could you please share the cross-module error message?
Switch from `Foo` to `FloatModel` (and uncomment it):
saeta@saeta:~/src/swift-models$ swift build
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:28:24: error: generic parameter 'Scalar' could not be inferred
public var conv1 = Conv2D(filterShape: (5, 5, 1, 6), padding: .same, activation: relu)
^
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:28:24: note: explicitly specify the generic arguments to fix this issue
public var conv1 = Conv2D(filterShape: (5, 5, 1, 6), padding: .same, activation: relu)
^
<<#Scalar: TensorFlowFloatingPoint#>>
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:29:24: error: generic parameter 'Scalar' could not be inferred
public var pool1 = AvgPool2D(poolSize: (2, 2), strides: (2, 2))
^
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:29:24: note: explicitly specify the generic arguments to fix this issue
public var pool1 = AvgPool2D(poolSize: (2, 2), strides: (2, 2))
^
<<#Scalar: TensorFlowFloatingPoint#>>
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:30:24: error: generic parameter 'Scalar' could not be inferred
public var conv2 = Conv2D(filterShape: (5, 5, 6, 16), activation: relu)
^
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:30:24: note: explicitly specify the generic arguments to fix this issue
public var conv2 = Conv2D(filterShape: (5, 5, 6, 16), activation: relu)
^
<<#Scalar: TensorFlowFloatingPoint#>>
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:31:24: error: generic parameter 'Scalar' could not be inferred
public var pool2 = AvgPool2D(poolSize: (2, 2), strides: (2, 2))
^
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:31:24: note: explicitly specify the generic arguments to fix this issue
public var pool2 = AvgPool2D(poolSize: (2, 2), strides: (2, 2))
^
<<#Scalar: TensorFlowFloatingPoint#>>
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:32:26: error: generic parameter 'Scalar' could not be inferred
public var flatten = Flatten()
^
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:32:26: note: explicitly specify the generic arguments to fix this issue
public var flatten = Flatten()
^
<<#Scalar: TensorFlowFloatingPoint#>>
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:33:22: error: generic parameter 'Scalar' could not be inferred
public var fc1 = Dense(inputSize: 400, outputSize: 120, activation: relu)
^
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:33:22: note: explicitly specify the generic arguments to fix this issue
public var fc1 = Dense(inputSize: 400, outputSize: 120, activation: relu)
^
<<#Scalar: TensorFlowFloatingPoint#>>
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:34:22: error: generic parameter 'Scalar' could not be inferred
public var fc2 = Dense(inputSize: 120, outputSize: 84, activation: relu)
^
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:34:22: note: explicitly specify the generic arguments to fix this issue
public var fc2 = Dense(inputSize: 120, outputSize: 84, activation: relu)
^
<<#Scalar: TensorFlowFloatingPoint#>>
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:35:22: error: generic parameter 'Scalar' could not be inferred
public var fc3 = Dense(inputSize: 84, outputSize: 10)
^
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:35:22: note: explicitly specify the generic arguments to fix this issue
public var fc3 = Dense(inputSize: 84, outputSize: 10)
^
<<#Scalar: TensorFlowFloatingPoint#>>
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:40:53: error: use of undeclared type 'TensorF'
public func callAsFunction(_ input: TensorF) -> TensorF {
^~~~~~~
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:40:41: error: use of undeclared type 'TensorF'
public func callAsFunction(_ input: TensorF) -> TensorF {
^~~~~~~
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:27:29: error: use of undeclared type 'FloatModel'
public struct LeNet: Layer, FloatModel {
^~~~~~~~~~
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:27:15: error: type 'LeNet' does not conform to protocol 'Layer'
public struct LeNet: Layer, FloatModel {
^
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:27:15: error: type 'LeNet.TangentVector' does not conform to protocol 'VectorProtocol'
public struct LeNet: Layer, FloatModel {
^
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:27:15: error: type 'LeNet' does not conform to protocol 'Module'
public struct LeNet: Layer, FloatModel {
^
TensorFlow.Module:2:20: note: protocol requires nested type 'Input'; do you want to add it?
associatedtype Input
^
TensorFlow.Module:3:20: note: protocol requires nested type 'Output'; do you want to add it?
associatedtype Output : Differentiable
^
[2/22] Compiling ImageClassificationModels LeNet-5.swift
saeta@saeta:~/src/swift-models$ git diff
diff --git a/Models/ImageClassification/LeNet-5.swift b/Models/ImageClassification/LeNet-5.swift
index 8e4caa2..3a448b9 100644
--- a/Models/ImageClassification/LeNet-5.swift
+++ b/Models/ImageClassification/LeNet-5.swift
@@ -24,7 +24,7 @@ import ModelSupport
// Additionally, ReLU is used instead of sigmoid activations.
-public struct LeNet: Layer, Foo {
+public struct LeNet: Layer, FloatModel {
public var conv1 = Conv2D(filterShape: (5, 5, 1, 6), padding: .same, activation: relu)
public var pool1 = AvgPool2D(poolSize: (2, 2), strides: (2, 2))
public var conv2 = Conv2D(filterShape: (5, 5, 6, 16), activation: relu)
diff --git a/Support/SugarLayer.swift b/Support/SugarLayer.swift
index 272c6be..99f3dd9 100644
--- a/Support/SugarLayer.swift
+++ b/Support/SugarLayer.swift
@@ -23,12 +23,12 @@ public protocol EasyLayers {
}
-/*
// Tinkercruft...
public protocol TypedModel: Layer {
associatedtype Scalar: TensorFlowScalar
typealias Tensor = TensorFlow.Tensor<Scalar>
+ typealias TensorF = TensorFlow.Tensor<Float>
typealias TTensor = TensorFlow.Tensor
}
@@ -40,4 +40,3 @@ public extension TypedModel where Scalar: TensorFlowFloatingPoint {
}
public protocol FloatModel: TypedModel where Scalar == Float {}
-*/
One thing I did just realize is that I perhaps was getting some weird behavior due to SwiftPM. A weaker form of what I'm getting at, which didn't compile before, now does compile. I'll need to play around with things a bit further to figure out what's going on, but the error behavior above does appear stable to me.
/usr/local/google/home/saeta/src/swift-models/Models/ImageClassification/LeNet-5.swift:28:24: error: generic parameter 'Scalar' could not be inferred
    public var conv1 = Conv2D(filterShape: (5, 5, 1, 6), padding: .same, activation: relu)
`error: generic parameter 'Scalar' could not be inferred` suggests to me that the issue is name aliasing: `Conv2D` resolves to `struct Conv2D<Scalar>` instead of `typealias Conv2D = ...`. The latter doesn't have a generic parameter.
You could verify whether name aliasing is the issue by renaming the typealias to something non-aliasing (`typealias Conv2DTest = ...`) and changing callers to use the non-aliasing name. The errors should then go away.
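A sketch of that experiment (the `Conv2DTest` name is the hypothetical non-aliasing one suggested above; whether the bare name resolves without a `Self.` prefix is part of what the experiment checks):

```swift
import TensorFlow

// Non-aliasing name: it cannot collide with `TensorFlow.Conv2D`.
public protocol Foo {
    typealias Conv2DTest = TensorFlow.Conv2D<Float>
}

// If the "generic parameter 'Scalar' could not be inferred" errors disappear
// once call sites use the non-aliasing name, shadowing was the culprit.
public struct Probe: Foo {
    public var conv1 = Conv2DTest(filterShape: (5, 5, 1, 6), padding: .same, activation: relu)
}
```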
One thing I did just realize is that I perhaps was getting some weird behavior due to SwiftPM.
Update: it appears that you can `import` packages you don't actually depend upon in `Package.swift`. Fixing that up seems to address one of the bits of weird behavior I was seeing. I think it would be good for SwiftPM (or rather LLBuild?) to sandbox the compiler execution so you don't get this non-deterministic behavior.
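For reference, an illustrative `Package.swift` fragment showing the shape of the fix described here: each target explicitly lists the modules it imports (target names are taken from the error output above; the rest is an assumption):

```swift
// swift-tools-version:5.2
import PackageDescription

// Illustrative only: a target should declare every module it imports,
// rather than relying on another target's build products happening to be
// visible during a non-sandboxed build.
let package = Package(
    name: "swift-models",
    targets: [
        .target(name: "ModelSupport"),
        .target(name: "ImageClassificationModels", dependencies: ["ModelSupport"]),
    ]
)
```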
You could verify whether name aliasing is the issue by renaming the typealias to something non-aliasing (typealias Conv2DTest = ...) and changing callers to use the non-aliasing name. Errors should then go away.
Yes, I've confirmed that this works. (I did `conv2D` when I was futzing around with this last night.) In short, the behavior that's currently implemented in the compiler seems odd to me. Now that I've fixed my SwiftPM build non-determinism, why does `EasyLayers` work, but `FloatModel` not?
Update: it appears that you can import packages you don't actually depend upon in Package.swift.
Got a bug report?
    public var flatten = Flatten()
    public var fc1 = Dense(inputSize: 400, outputSize: 120, activation: relu)
    public var fc2 = Dense(inputSize: 120, outputSize: 84, activation: relu)
    public var fc3 = Dense(inputSize: 84, outputSize: 10)
This looks way cleaner without noisy `<Float>` everywhere.
// Can't move this to another module apparently...
public protocol Foo {
    typealias Conv2D = TensorFlow.Conv2D<Float>
Regarding strategies to avoid `<Scalar>` everywhere:
I still think that dynamically dtyped tensors (`struct Tensor` with no `Scalar` generic parameter), a la PyTorch C++, are an interesting direction that may greatly improve usability in practice!
@apaszke's supporting arguments for dynamically dtyped tensors (from a few months ago) made sense to me! Here's my summary, for reference:
1. Tensors have shape and dtype. Shape is more useful than dtype to encode in types for static verification, but it's not practical to encode shapes in Swift's type system today (it's dependently typed programming).
2. Swift's syntax for declaring generic declarations is not particularly lightweight: `<Scalar>` everywhere, possibly with a long list of generic constraints. Languages like Haskell have much more concise syntax for working with type parameters.
   - Given Swift's non-lightweight syntax for generics, users often specialize code (like models) to concrete types like `Tensor<Float>` instead of writing generic code, due only to the syntactic burden of the latter.
   - This is not ideal if there are use cases for generic code - I'm not sure whether there are use cases in practice, but a hypothetical one is "running the same model with either `Tensor<Float>` or `Tensor<Double>` parameters".
Given 1 (static dtype is not so useful) and 2 (static dtype hurts usability and causes suboptimal design patterns), we can simply move dtype information from static types into dynamic values in `Tensor`.
This results in a dynamically dtyped tensor, just like the ones in Python libraries (NumPy, TF, PyTorch, etc.) and `at::Tensor` in PyTorch C++. The behavior of dynamically dtyped tensors (e.g. type promotion semantics) is well-explored and well-understood by users.
In Swift for TensorFlow: changing `struct Tensor<Scalar>` to `struct Tensor` (dynamic dtype) would be a big user-facing design change though. Other folks are investigating more statically typed tensors (the opposite direction) too - the usability of that is to be discovered.
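To make the direction concrete, a rough standalone sketch of what "dtype as a value" could look like (purely illustrative, not a proposed API):

```swift
// Purely illustrative: dtype carried as a runtime value, not a generic parameter.
enum DType: Int, Comparable {
    case float16, float32, float64
    static func < (lhs: DType, rhs: DType) -> Bool { lhs.rawValue < rhs.rawValue }
}

struct DynTensor {
    var dtype: DType
    var shape: [Int]

    // NumPy/PyTorch-style promotion happens at runtime; mismatches that a
    // static `Scalar` parameter would catch at compile time surface as
    // runtime errors instead.
    static func + (lhs: DynTensor, rhs: DynTensor) -> DynTensor {
        precondition(lhs.shape == rhs.shape, "shape mismatch")
        return DynTensor(dtype: max(lhs.dtype, rhs.dtype), shape: lhs.shape)
    }
}
```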
Another argument for removal of dtype: with X10, Tensor's static dtype does not always reflect the actual tensor runtime type. Mixed precision can mean that `Tensor<Float>` performs computations at lower than `Float` precision. So not only are the types inconvenient here, they can also give a false impression to the end-user.
Another argument for removal of dtype: with X10, Tensor's static dtype does not always reflect the actual tensor runtime type. Mixed precision can mean that `Tensor<Float>` performs computations at lower than `Float` precision.

Totally agree.
I think `Tensor<Float>` mixed precision is a mini-implementation of a dynamic dtype `Tensor` - the actual dtype (`float16` or `float32`) is stored as a value. Full removal of dtype would be more general.
Moving comment by @dabrahams to this thread:
I'm pretty suspicious of the idea of removing the static scalar type, particularly the implications it would hold for generating optimized code for CPUs, FWIW.
Aside from the main topic: usability and performance seem like separate concerns to me.
I would say that usability (the topic of this thread, for dynamic dtype tensor) is a first principle, over performance. Though usable APIs should be designed with performance in mind, like verifying that generated SIL is or has potential to be optimized.
SIMD and atomic types come to mind as examples. The general consensus seems to be that generic types (e.g. `SIMD2<Float>`) are a good, usable interface, but can be optimized via specialization to have no extra cost.
Dynamic dtype tensors seem harder to optimize though. I heard operations between dynamic dtype tensors and concrete scalar types may involve opening existentials - I don't know how well that can be optimized.
I would say that usability (the topic of this thread, for dynamic dtype tensor) is a first principle, over performance.
Note: I think it's important to recognize that design choices have tradeoffs, and I don't think picking one over the other as a blanket philosophy is appropriate. :-)
I just want to point out that there are lots of ways to get to a syntax that drops pervasive use of `<Float>`, for example:

public let leNet = Sequence<Float>()
    .conv2D(filterShape: (5, 5, 1, 6), padding: .same, activation: relu)
    .avgPool2D(poolSize: (2, 2), strides: (2, 2))
    .conv2D(filterShape: (5, 5, 6, 16), activation: relu)
    .avgPool2D(poolSize: (2, 2), strides: (2, 2))
    .flatten
    .dense(inputSize: 400, outputSize: 120, activation: relu)
    .dense(inputSize: 120, outputSize: 84, activation: relu)
    .dense(inputSize: 84, outputSize: 10)

This particular arrangement might not work, but that's not the point; there are other alternatives. I just ask that when considering syntax improvements, we step back from our existing codebase and consider which requirements are fundamental. It can be really useful to prove out syntax concepts in small, self-contained experiments that don't depend so much on the current state of the world, where we can separate the inherent implications that fall out of any design from those that are just based on where we are now.
I'm pretty suspicious of the idea of removing the static scalar type, particularly the implications it would hold for generating optimized code for CPUs, FWIW.
One key argument for keeping the generic type is that it lets us know what we can back-propagate through vs. not. As a result, keeping that generic type actually improves usability to some degree, as it lets us get better AD errors.
That's a fair point. Static dtype does help there. However, I do believe it's possible to compensate for the lack of compile-time safety with good runtime error messages and stack traces. Python n-d array libraries are good proof of this (PyTorch has great runtime error messages, last I checked). The tradeoff is between compile-time safety (static dtype) and syntactic lightness backed by good runtime diagnostics (dynamic dtype).
Yup, definitely possible as well. There are some systems where you silently get zeros / etc., and my prior is fairly strongly opposed to that. But I haven't thought through enough of the cases.
@saeta I can't reproduce your compilation error because of the below, but the most likely explanation for your cross-module problem is that the
Now I can't reproduce the compile error either... I'm wondering if I was just really badly hit by SR-12688. (Doing a lot of
Heh, my bad! I never ever use
…exible. Additionally, add these helpers to simplify `DLRM` and `MLP` from the recommendations models package.