Tensor String Formatting
When you’re exploring TorchSharp in a notebook, or debugging in Visual Studio, it can be very useful to print the contents of tensors using ToString(), just as it is for most data types. ToString() is also what the Visual Studio debugger uses by default to show values in the 'Locals' and 'Autos' windows.
However, since tensors can be very large and organized into more than two dimensions, it can be challenging to use the default debugger view to make sense of them, or to fit them in a small area in the UI.
For that reason, and to provide flexibility, we have defined an overload of ToString() that takes additional arguments. It allows you to decide how much information to include, and how to present it.
public string ToString(TensorStringStyle style,
string fltFormat = "g5",
int width = 100,
CultureInfo? cultureInfo = null,
string newLine = "")
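Since everything after the style has a default value, the remaining arguments can be supplied by name. Here is a quick sketch (it assumes the TorchSharp package is referenced; the parameter names are those declared above):

```csharp
using System;
using TorchSharp;
using static TorchSharp.torch;

var t = ones(3, 3);

// Override only the arguments you care about; the rest keep their defaults.
var s = t.ToString(TensorStringStyle.Julia, fltFormat: "0.00", width: 60);
Console.WriteLine(s);
```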
The most significant of these arguments is the style, which is declared this way:
public enum TensorStringStyle
{
Metadata, // Print only shape, dtype, and device
Julia, // Print tensor in the style of Julia
Numpy, // Print tensor in the style of NumPy
CSharp, // Print tensor in the style of C#
Default, // Pick up the style to use from torch.TensorStringStyle
}
The overridden ToString() uses the Metadata style, which includes the shape, element type, and device of the tensor, but no data.
CSharp, Julia and Numpy offer three different styles of organizing the contents of a tensor: the C# language syntax for multi-dimensional arrays, the way the Julia language prints multi-dimensional arrays, or the style in which numpy arrays are printed, which is also how PyTorch tensors are printed. We added both of the latter styles to TorchSharp, since there are plenty of advocates of either style.
Here’s a quick look at how the styles differ. First, let’s look at a 3x3 matrix filled with the number '1':
torch.ones(3,3).ToString(TensorStringStyle.Metadata)
[3x3], type = Float32, device = cpu
torch.ones(3,3).ToString(TensorStringStyle.Julia)
[3x3], type = Float32, device = cpu
1 1 1
1 1 1
1 1 1
torch.ones(3,3).ToString(TensorStringStyle.Numpy)
[[1 1 1]
[1 1 1]
[1 1 1]]
There is not a huge difference between the latter two, by any means. However, the difference is clearer when there are more than two dimensions:
torch.ones(3,3,3).ToString(TensorStringStyle.Julia)
[3x3x3], type = Float32, device = cpu
[0,..,..] =
1 1 1
1 1 1
1 1 1
[1,..,..] =
1 1 1
1 1 1
1 1 1
[2,..,..] =
1 1 1
1 1 1
1 1 1
torch.ones(3,3,3).ToString(TensorStringStyle.Numpy)
[[[1 1 1]
[1 1 1]
[1 1 1]]
[[1 1 1]
[1 1 1]
[1 1 1]]
[[1 1 1]
[1 1 1]
[1 1 1]]]
Which one is clearer and easier to read is a matter of individual taste, which is why we included both. For anyone accustomed to the Numpy way of representing data, the latter is going to feel familiar.
The C# style is particularly useful if you want to take the elements of a tensor and integrate them into code, hard-coding the tensor. You should be able to copy the output and paste it into your code:
var randomized = torch.tensor(new float [,,] {
{{0.39968f, 0.80894f, 0.97281f},
{0.020772f, 0.33976f, 0.096514f},
{0.014841f, 0.82176f, 0.84639f}},
{{0.29641f, 0.64621f, 0.34868f},
{0.13714f, 0.53577f, 0.26223f},
{0.56427f, 0.2322f, 0.17051f}}
});
There’s a fifth "style": Default, which simply picks up the style from a global property, torch.TensorStringStyle. More on that later.
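A sketch of how Default defers to that global property (assuming the TorchSharp package is referenced):

```csharp
using System;
using TorchSharp;

// Set the global default once, e.g. at the top of a notebook or program.
torch.TensorStringStyle = torch.numpy;

var t = torch.ones(2, 2);

// TensorStringStyle.Default picks up the global setting above,
// so this prints in the Numpy style.
Console.WriteLine(t.ToString(TensorStringStyle.Default));
```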
Typing .ToString(TensorStringStyle.Numpy) just to get the string representation of a tensor is a lot of overhead, especially in an interactive notebook environment. Therefore, we have defined a few shortcuts for the common cases: str(), jlstr(), and npstr().
jlstr() takes all the arguments that ToString() takes, except for the style; it is always Julia. Likewise, npstr() takes all the arguments except the style, which is always Numpy. str() does take all the arguments, but the style goes last instead of first, so that you don’t have to pass it if the global default is what you want. ToString() couldn’t be defined to take a default style argument, since that would not have allowed it to be distinguished from the overridden ToString().
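A sketch of the shortcuts side by side (argument names are assumed to mirror ToString() above):

```csharp
using TorchSharp;

var t = torch.rand(2, 3);

var jl  = t.jlstr();                   // always the Julia style
var np  = t.npstr();                   // always the Numpy style
var def = t.str();                     // whatever the global default style is

// The other ToString() arguments still apply to the shortcuts.
var jl2 = t.jlstr(fltFormat: "0.00");
```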
There’s also an extension method print(), which acts a lot like str(), but rather than returning a string, it uses Console.WriteLine() to output the string representation. This is primarily useful in interactive notebooks, where only the final value of a cell is automatically printed; if you want more than one tensor printed, you have to use print() on all but the last one.
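In a notebook cell, that might look like this sketch:

```csharp
using TorchSharp;

var a = torch.ones(2, 2);
var b = torch.zeros(2, 2);

a.print();   // written with Console.WriteLine(), so it always shows up
b.print();   // printing both explicitly avoids relying on the cell's final value
```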
What about the other arguments to ToString() and its related functions?
public string ToString(TensorStringStyle style,
string fltFormat = "g5",
int width = 100,
CultureInfo? cultureInfo = null,
string newLine = "")
The fltFormat argument is only used when the element type is a floating-point or complex type. It defines the formatting to use for each element in the tensor, and is ignored for any other element type. The width argument gives the implementation an idea of how wide the area where the string will be presented is, so that it can choose how to shorten the output when the number of columns is large.
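For example, fltFormat takes a standard .NET numeric format string, so a sketch like this prints two fixed decimals and then scientific notation:

```csharp
using System;
using TorchSharp;

var t = torch.rand(3, 3);

// Two fixed decimal places per element ("0.00" is a custom .NET format).
Console.WriteLine(t.jlstr(fltFormat: "0.00"));

// Scientific notation with two digits of precision, in a narrower area.
Console.WriteLine(t.jlstr(fltFormat: "e2", width: 60));
```

The width argument, by contrast, only hints at the available space, as the next example shows.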
For example, rather than this:
torch.randn(3,100).jlstr(width:550)
[3x100], type = Float32, device = cpu
0.39961 0.67923 0.96858 -0.58939 0.30464 -1.4023 0.73922 0.27924 0.15534 -0.17394 -0.75972 -0.4809 0.6488 0.31183 1.4443 0.30188 -1.552 -1.4242 0.95294 -0.79964 -0.19688 0.54915 -1.694 -0.16999 -1.5105 -1.1646 0.26663 0.31835 -0.16399 0.65796 2.3718 -1.4615 -0.21647 -1.8211 -2.2843 0.19584 0.64533 4.0861 -0.3737 0.63961 -1.7271 -0.91983 0.63095 -0.53881 1.1303 0.019958 -0.54106 0.28487 0.46427 -0.87434 -0.46904 -0.98492 0.0066073 -0.77566 -1.3657 -0.69453 0.20453 -0.34611 -0.0044542 -0.58607 -0.41976 0.97061 -1.1706 -2.2289 0.24236 ...
0.88228 -0.47664 -0.31813 0.60438 -0.05668 -1.1634 -1.9318 -2.3889 0.24335 2.6409 0.07949 -0.43515 -0.60258 0.56644 -0.31572 0.94659 -1.5979 0.3805 -0.48523 -0.042575 -0.077746 -0.61351 -1.2998 0.073437 -0.16761 -0.023346 1.2415 -0.096478 -2.4586 -0.60854 0.53945 0.91313 1.7459 0.51647 1.0187 0.29882 0.83831 -0.74449 -0.76438 0.84669 0.21551 0.97651 0.40308 -0.82097 2.6452 1.6749 0.39924 -1.68 -0.59928 -0.47864 0.62844 -0.91221 -1.5619 -0.1737 2.1091 0.48384 0.31416 -0.68605 -1.0708 0.32494 -0.95945 -0.95971 0.18031 0.55215 0.46873 ...
-1.1164 -0.52894 -0.20548 -0.016545 0.1275 -0.52425 2.2246 -0.039074 -0.19637 0.54675 1.4729 -0.14612 -0.23858 -1.3437 0.66999 0.085442 0.88969 0.035115 -1.5373 0.91888 1.4123 -0.038741 -0.45409 0.72637 -0.37498 -0.32789 -0.44713 0.91136 -0.8941 0.82456 1.5407 0.38834 -0.041698 0.35848 -0.56781 0.039923 0.35902 -0.10686 0.38838 -0.62046 -0.87547 -0.45941 -0.41666 2.7599 2.1758 -0.18765 0.056884 -1.1636 1.1026 -2.4018 -0.47213 1.1933 -0.99285 -0.23025 -0.28462 -0.80987 -3.3205 -0.089942 1.4988 0.18078 -1.8728 0.16525 1.069 1.5774 1.326 ...
you get something like this:
torch.randn(3,100).jlstr(width:100)
[3x100], type = Float32, device = cpu
0.38575 0.14682 -1.3343 0.97528 0.16269 0.20911 -1.0213 1.7619 0.42714 -1.2133 1.1741 ...
-0.0034484 0.8973 -0.24118 0.13032 0.089748 -1.2928 -2.0405 -1.1064 -0.33096 -0.28353 -2.6013 ...
-0.13793 -0.62087 0.6404 -0.62104 0.73969 -0.58039 0.75774 -0.67249 -0.42547 0.30504 0.32416 ...
In the Numpy style, the same kind of shortening looks like this:
[[0.88569, -1.3835, -1.4953, ... 0.58911, -0.33509, -0.5018]
[-0.71946, 1.6017, -0.46037, ... -0.39442, 0.32384, -0.61707]
[-0.016624, 0.56394, 1.3878, ... -2.0872, -0.73017, -1.058]
...
[0.79632, -0.64905, 0.00013783, ... 1.5004, 0.32415, -0.37784]
[-0.3786, 0.32678, -2.1287, ... -1.2186, -0.43705, -1.6703]
[2.1069, 0.53998, -1.8679, ... -0.057906, 0.84508, 0.01247]]
And in the C# style:
[25x25], type = Float32, device = cpu, value =
float [,] {
{-0.99011f, 0.38894f, -1.5472f, ... -1.0196f, 0.57087f, -0.64018f},
{0.83486f, -0.90321f, 0.38012f, ... -1.6043f, -0.41244f, 0.4333f},
{1.2219f, -1.3939f, -2.3336f, ... 0.56954f, 0.49401f, 1.6925f},
...
{-0.5786f, -0.8039f, 0.86181f, ... -1.2125f, -0.23528f, -1.1929f},
{0.15651f, -0.091215f, 0.75715f, ... 0.21679f, -0.82713f, 0.23803f},
{-0.73278f, -1.7945f, -1.5174f, ... -0.99687f, -0.38036f, -0.44744f}
}
The culture info argument is what you’d think — a way to localize how data is presented.
torch.randn(3,3).str()
[3x3], type = Float32, device = cpu
0.19155 -1.5751 0.18628
-0.77029 -2.7259 -0.56538
-0.27825 -0.095495 -0.51839
torch.randn(3,3).str(cultureInfo: CultureInfo.GetCultureInfoByIetfLanguageTag("sv-se")) // Swedish / Sweden language code
[3x3], type = Float32, device = cpu
0,92378 −0,29287 −0,4366
−0,19261 −0,19461 0,18311
1,5109 −0,73944 −0,36273
Finally, the newLine argument controls which string to use to represent new lines in the formatting. Since the representation of a new line varies between platforms, ToString() will pick up the platform default, but it may sometimes be useful to override it and use a newLine of your choice:
torch.randn(3,3).str(newLine: "|\n")
[3x3], type = Float32, device = cpu|
-1.4442 0.096091 0.32561|
-0.67556 -1.5253 0.040961|
2.5447 -0.17895 -0.021136|
That particular example makes no sense (to me), but it hopefully demonstrates how the newLine string is used.
The style of output can be controlled by setting
torch.TensorStringStyle = torch.csharp;
However, it is common to want to control the multiple formatting options all at once, typically at the top of a notebook or program, for consistency. The function set_printoptions() achieves this. There are two overloads:
public static void set_printoptions(
int precision,
int? linewidth = null,
string? newLine = null,
bool sci_mode = false,
int? maxRows = null,
int? maxColumns = null)
public static void set_printoptions(
TensorStringStyle? style = null,
string? floatFormat = null,
int? linewidth = null,
string? newLine = null,
int? maxRows = null,
int? maxColumns = null)
In each case, the null value is intended to indicate an argument that should be ignored by the function. Thus, if you don’t pass floatFormat, it won’t be modified.
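A sketch of incremental configuration, relying on null meaning "leave unchanged":

```csharp
using TorchSharp;

// Set the line width only; all other options keep their current values.
torch.set_printoptions(linewidth: 72);

// Later, change only the float format; the line width remains 72.
torch.set_printoptions(floatFormat: "0.000");
```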
The style argument controls what the Default style represents, i.e. what calls to 'ToString(…)' (not the overridden version) will use. It is overridden by passing a style argument to the output methods when calling them.
Thus:
torch.set_printoptions(maxColumns: 8, maxRows: 8, style:torch.numpy);
torch.randn(25,25).str()
results in the following:
[[1.4958, -0.33898, 0.11638, 0.026303, ... 0.85961, 1.3367, -1.9239, -1.6381]
[-0.7755, 0.67999, 1.3628, 0.047138, ... -2.9721, 0.98992, 0.1971, -0.57758]
[-1.4295, -0.39611, 0.069047, -0.55881, ... 0.5975, -0.82019, -0.17315, 2.4155]
[0.29969, -0.5946, 0.20053, -1.4262, ... -0.32594, 1.2165, -1.4369, 1.7779]
...
[0.50969, -0.73466, -0.18271, -1.0463, ... -0.26934, 0.36326, -0.28367, 1.3282]
[-0.95745, -0.067352, 1.1919, -0.39224, ... -1.5375, -0.098644, 0.60858, 0.3744]
[-0.69249, -0.73952, 1.5442, 0.8282, ... -1.3961, -0.014752, -0.19993, -0.83121]
[-1.9592, 1.2055, 1.6593, 0.005011, ... 1.6831, 0.31518, 0.055996, 0.87214]]
torch.set_printoptions(maxColumns: 10, maxRows: 12, style:torch.csharp);
torch.randn(25,25).str()
produces:
[25x25], type = Float32, device = cpu, value =
float [,] {
{-1.8427f, 2.2181f, -1.244f, 0.09744f, 0.55435f, ... 0.18494f, -0.47441f, 1.8567f, -0.54009f, -0.15402f},
{0.73358f, 0.10621f, -3.1022f, -0.22865f, -0.79587f, ... 2.1146f, 0.31794f, 1.4926f, -0.4481f, -1.5342f},
{1.3869f, 0.86889f, -1.2155f, -1.1103f, -0.18908f, ... 0.6824f, -1.1462f, -1.5912f, -1.4182f, -1.5756f},
{1.7748f, 0.065773f, -0.40603f, 1.1006f, 0.31868f, ... 0.70086f, -0.93237f, 0.61093f, 1.8628f, -0.7045f},
{1.7281f, -1.1126f, 0.17f, -0.37199f, 1.8349f, ... -0.45376f, 0.54242f, 1.5973f, -1.0229f, 0.34773f},
{-0.21007f, -0.38899f, -0.24446f, -0.25956f, 0.039702f, ... 0.47758f, -2.3914f, 2.6445f, -0.32681f, -0.52768f},
...
{1.4807f, -0.049487f, -1.3587f, -1.2338f, 0.5416f, ... 2.057f, -0.62509f, -0.29564f, 0.33226f, 1.1793f},
{-1.0604f, -0.77631f, -0.42134f, -0.15712f, 0.83793f, ... -1.3218f, 0.70963f, 1.1274f, 0.5931f, -0.60369f},
{0.20218f, 0.96822f, 0.79544f, 1.8814f, 0.67919f, ... 0.59385f, -0.53653f, -0.5478f, 0.53749f, -0.53689f},
{1.8203f, -0.32337f, 0.092787f, 1.5743f, 0.86117f, ... -1.3944f, 0.0074593f, -0.26086f, -0.097131f, -0.22297f},
{0.80508f, 0.90638f, 0.79406f, -0.32998f, -1.928f, ... -0.98385f, -2.1722f, -1.3818f, 0.82047f, 0.93217f},
{0.55915f, -0.10572f, 0.32991f, 1.0748f, -0.35665f, ... 0.66517f, 0.79564f, 0.76089f, 0.47551f, -0.55424f}
}
Note that 'maxRows' applies to all but the innermost dimension, while 'maxColumns' applies only to the innermost dimension:
torch.set_printoptions(maxColumns: 12, maxRows: 6, style:torch.csharp);
torch.randn(25, 25,25).str()
[25x25x25], type = Float32, device = cpu, value =
float [,,] {
{{0.46484f, 1.1453f, -0.38442f, 0.4994f, 0.87587f, -0.33014f, ... -0.30192f, -0.2579f, 1.3135f, -0.13669f, 0.79367f, 0.2425f},
{0.37502f, 2.143f, 0.33489f, 1.8463f, 0.59611f, 0.50837f, ... -1.2425f, 1.0152f, 0.59719f, 0.7154f, 1.2634f, 0.9853f},
{-0.41492f, -0.12607f, 1.7596f, 0.63303f, 1.1464f, 0.64024f, ... -0.2057f, -0.15431f, 0.81699f, 2.2008f, -0.82719f, 1.0739f},
...
{0.69087f, -0.17838f, 0.89019f, -0.25722f, 1.9851f, -0.23479f, ... 0.80878f, -0.84304f, 1.2156f, 0.83226f, 0.50176f, -2.1067f},
{0.77584f, -1.1449f, -0.42962f, -0.4484f, 0.44536f, -0.58471f, ... -0.89785f, 1.5414f, -0.4349f, 0.94405f, 1.1581f, 0.24505f},
{1.0722f, -0.95368f, 0.51652f, -0.36254f, 2.0657f, 0.66569f, ... -0.37393f, 0.46886f, -0.37475f, -1.5009f, -0.60305f, -0.052035f}},
{{0.40762f, -1.1466f, 0.80702f, -0.44906f, -0.47536f, 0.6193f, ... 0.84878f, 1.1425f, -0.85869f, -0.95534f, -0.78735f, 1.2911f},
{-0.99236f, 0.30361f, -0.52461f, 1.5326f, 0.56965f, -1.581f, ... -1.4571f, 0.4199f, 1.8625f, -0.97361f, 1.2389f, 0.16471f},
{-0.72217f, -1.2829f, -1.4987f, 1.2703f, 0.37606f, -0.32288f, ... 0.058399f, -1.5636f, 1.9677f, -1.7188f, -0.28474f, 1.178f},
...
{-0.41106f, -0.02472f, 0.072064f, -1.6031f, -0.72987f, 0.14279f, ... -0.8858f, 1.5169f, -0.53561f, -0.844f, 1.8163f, -0.39383f},
{0.00066328f, 1.6115f, 1.3762f, -0.28966f, -0.16561f, 0.37084f, ... -1.423f, 0.71974f, 0.63089f, 1.0016f, -1.3764f, 0.32154f},
{-0.4127f, -0.090073f, -2.0051f, -0.78765f, -0.072393f, -1.2063f, ... -0.44321f, 0.076111f, 1.0956f, 0.63558f, 0.039704f, 0.80098f}},
{{-0.55327f, -0.53788f, 1.0142f, 0.61031f, -0.047469f, 0.45438f, ... -0.41831f, 1.9152f, 1.2387f, -0.9188f, -1.6115f, 0.4104f},
{-0.21241f, 0.50108f, 0.15235f, 1.4751f, 2.6085f, 0.36758f, ... -0.099767f, 0.706f, -0.95838f, 0.60112f, 0.57778f, -1.8621f},
{-1.0292f, -0.074851f, 1.4436f, 0.019964f, 0.47841f, -0.38507f, ... -0.23235f, -1.0471f, -0.96821f, 0.21197f, -1.2641f, 1.4222f},
...
{0.35346f, -1.0978f, 1.0783f, -1.8826f, 0.019178f, -0.37304f, ... 0.4707f, 0.31425f, 0.36623f, 0.08892f, -0.063257f, -0.67198f},
{0.91634f, -0.82555f, -0.01013f, -0.5339f, 0.23621f, 0.36602f, ... -1.14f, -0.75168f, -0.74534f, -0.41746f, 0.052348f, -0.18795f},
{1.4531f, 0.0053751f, -0.08764f, -0.95183f, -0.92927f, -0.21766f, ... -1.2244f, -0.078415f, -0.0055161f, 0.53775f, 1.6352f, 0.56129f}},
...
{{-1.3021f, -1.6515f, 0.56779f, 1.2135f, -0.58811f, 0.72875f, ... -0.59124f, 3.2078f, -0.8504f, 1.2613f, 0.80012f, -0.70036f},
{0.22347f, 0.08081f, 0.35854f, -0.72718f, 1.2302f, -1.0545f, ... 0.30114f, 1.5492f, -1.2702f, 0.65449f, -0.84743f, 0.94082f},
{-0.027613f, -0.40343f, 0.53576f, -1.7952f, 0.97557f, 0.25007f, ... -1.8858f, 0.96947f, -0.07066f, 2.3351f, -1.0553f, -0.095803f},
...
{-1.1922f, -0.38008f, 0.76364f, 0.61576f, 0.31733f, -2.218f, ... 0.14689f, 1.6132f, -0.70541f, 1.6334f, 0.10102f, 1.1562f},
{0.49189f, -0.076451f, -0.52084f, -0.97036f, 0.22965f, 0.51458f, ... -0.83521f, -1.0956f, 0.61623f, 1.7269f, -0.17677f, -1.4558f},
{-0.10357f, 0.74027f, -0.57478f, -1.159f, -0.1414f, -1.6482f, ... -0.24374f, -0.40227f, 0.58743f, -0.67235f, 0.67302f, 1.9821f}},
{{-0.73775f, -0.75068f, 0.0088557f, 0.61804f, 0.80365f, 1.8603f, ... 0.73353f, 1.9496f, -0.45968f, 0.91987f, 0.0057179f, 0.41747f},
{-0.19698f, 0.30937f, -0.53768f, 0.18932f, 0.3736f, 0.11991f, ... -0.3391f, -1.2925f, -0.054811f, -0.80772f, -0.52429f, -0.59734f},
{0.69847f, 0.022626f, 0.10907f, -1.0658f, 0.22687f, 1.0494f, ... 1.3766f, 1.489f, -0.20537f, 0.88951f, 0.41509f, 1.0855f},
...
{-0.65571f, 0.60492f, -0.94148f, 0.65174f, -0.65159f, 0.10763f, ... 2.1888f, 0.5246f, -1.0183f, -1.0964f, 0.84442f, -1.8904f},
{0.17487f, 0.26863f, 1.993f, -1.2846f, 1.4855f, -0.56595f, ... -1.3877f, -0.76866f, 0.68725f, 0.11849f, 1.5276f, 0.39938f},
{-0.1593f, -0.42161f, 0.034871f, -0.31628f, -0.85763f, -0.16332f, ... 0.87281f, 2.2595f, -0.83152f, -1.3625f, -0.58689f, -1.797f}},
{{1.8892f, -0.12277f, -0.89465f, -1.088f, 0.40973f, 0.16207f, ... -0.7639f, -0.064618f, -1.0051f, -0.0016508f, -0.79972f, -2.304f},
{-0.48759f, 0.92317f, 0.43859f, -1.0237f, -1.109f, 0.16239f, ... -0.55939f, 1.2974f, -0.41088f, 0.076437f, 0.6151f, 0.14237f},
{0.19251f, -0.49516f, -0.92816f, 0.76883f, 0.80422f, 0.37741f, ... 0.10201f, -1.8293f, 0.059122f, 0.19057f, -0.34502f, 0.78262f},
...
{0.45584f, -1.0156f, -0.029598f, 0.77693f, 0.60944f, 0.10427f, ... -0.42567f, -0.0088663f, -0.74144f, 0.29558f, -1.4813f, -0.082228f},
{0.34103f, -1.121f, 0.14455f, 0.095671f, 0.47719f, -0.77639f, ... -1.0518f, -1.9267f, -0.46923f, -1.3083f, 0.4856f, -2.461f},
{-0.64854f, 0.85626f, 0.1201f, 0.31007f, -1.3579f, 0.73765f, ... -0.42572f, -0.04279f, -0.53218f, -0.50823f, -1.4264f, 1.0459f}}
}