
Introducing Tiktoken Tokenizer #6981


Merged · 3 commits merged into dotnet:main on Feb 6, 2024

Conversation

tarekgh (Member) commented on Feb 1, 2024

This change introduces support for the Tiktoken tokenizer in the Microsoft.ML.Tokenizers library. The logic is largely derived from the Microsoft Tokenizers Library, and the update includes optimizations and adjustments to the public APIs. Further refinements to the APIs are pending and are tracked through issue #6982.

Usage

    Tokenizer tokenizer = await Tokenizer.CreateByModelNameAsync("gpt-4");

    // Encoding to Ids
    string text = "Hello World";
    IReadOnlyList<int> encoded = tokenizer.EncodeToIds(text);
    Assert.Equal(new List<int>() { 9906, 4435 }, encoded);
    Assert.Equal(text, tokenizer.Decode(encoded)!);

    // Full encoding to tokens, Ids, and offsets
    TokenizerResult result = tokenizer.Encode(text);
    Assert.Equal(new List<int>() { 9906, 4435 }, result.Ids);
    Assert.Equal(new string[] { "Hello", " World" }, result.Tokens);
    Assert.Equal(new List<(int, int)> { (0, 5), (5, 11) }, result.Offsets);

API changes

namespace Microsoft.ML.Tokenizers
{
    public class Tokenizer
    {
+        /// <summary>
+        /// Encodes the input text into an object holding the tokens list, token Ids, and token offset mappings.
+        /// </summary>
+        /// <param name="sequence">The text to tokenize.</param>
+        /// <param name="skipSpecialTokens">Indicates whether to skip the special tokens during encoding.</param>
+        /// <returns>The tokenization result, including the tokens list, token Ids, and token offset mappings.</returns>
+        public TokenizerResult Encode(string sequence, bool skipSpecialTokens); // overload adding the skipSpecialTokens parameter.

+        /// <summary>
+        /// Encodes the input text to token Ids.
+        /// </summary>
+        /// <param name="sequence">The text to tokenize.</param>
+        /// <param name="skipSpecialTokens">Indicates whether to skip the special tokens during encoding.</param>
+        /// <returns>The list of encoded Ids.</returns>
+        public IReadOnlyList<int> EncodeToIds(string sequence, bool skipSpecialTokens = false);

+        /// <summary>
+        /// Creates a tokenizer for the given model name.
+        /// </summary>
+        /// <param name="modelName">The model name.</param>
+        /// <param name="extraSpecialTokens">Extra special tokens other than the built-in ones for the model.</param>
+        /// <param name="normalizer">Optional normalizer to apply to the text before tokenization.</param>
+        /// <returns>The tokenizer.</returns>
+        public static async Task<Tokenizer> CreateByModelNameAsync(
+                                                string modelName,
+                                                IReadOnlyDictionary<string, int>? extraSpecialTokens = null,
+                                                Normalizer? normalizer = null);
    }

-    public class Split : IEquatable<Split>
+    public readonly struct Split : IEquatable<Split>
     {
-        public Split(string token, (int Index, int End) offset)
+        public Split(string token, (int Index, int End) offset, bool isSpecialToken = false)

+        /// <summary>
+        /// Gets if the current Split is a special token.
+        /// </summary>
+        public bool IsSpecialToken { get; }
    }

    public abstract class PreTokenizer
    {
+        // Primarily focused on minimizing memory allocations and enabling enumeration of one item at a time,
+        // rather than holding a large list in a collection.
+        // This change is reflected in all public classes implementing this abstraction.
-        public abstract IReadOnlyList<Split> PreTokenize(string sentence);
+        public abstract IEnumerable<Split> PreTokenize(string sentence, bool skipSpecialTokens = false);
    }

    public sealed class TokenizerResult
    {
-        public TokenizerResult(string originalString, string normalizedString, IReadOnlyList<Split> splits, bool offsetsMappedToOriginalString);
+        public TokenizerResult(string originalString, string normalizedString, IEnumerable<Split> splits, bool offsetsMappedToOriginalString);
    }


    public abstract class Model
    {
+        public virtual IReadOnlyList<Token> Tokenize(string sequence, bool isSpecialToken); // overload adding the isSpecialToken parameter.

+        public virtual bool TokenizeToIds(string sequence, bool isSpecialToken, List<int> accumulatedIds); // To be consumed by Tokenizer.EncodeToIds

+        public virtual int? TokenToId(string token, bool skipSpecialTokens); // overload adding the skipSpecialTokens parameter.
    }


+    public sealed class Tiktoken : Model
+    {
+        public Tiktoken(string tikTokenBpeFile, IReadOnlyDictionary<string, int>? specialTokensEncoder = null, int cacheSize = DefaultCacheSize);
+        public Tiktoken(Stream tikTokenBpeFileStream, IReadOnlyDictionary<string, int>? specialTokensEncoder = null, int cacheSize = DefaultCacheSize);

+        public IReadOnlyDictionary<string, int>? SpecialTokens { get; }

+        // Implement the Model abstract methods
+    }

+    public sealed class TikTokenPreTokenizer : PreTokenizer
+    {
+        public TikTokenPreTokenizer(string regexPattern, IReadOnlyDictionary<string, int>? specialTokensEncoder);

+        // Implement the PreTokenizer abstract methods
+    }
}
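
For context, a minimal sketch of wiring these new types together by hand instead of going through CreateByModelNameAsync. The BPE file path and the regex pattern below are placeholders, and the Tokenizer(Model, PreTokenizer, Normalizer) constructor shape is assumed from the existing library surface rather than defined in this PR:

    using System.Collections.Generic;
    using Microsoft.ML.Tokenizers;

    // Placeholder BPE file path; a real tiktoken BPE vocabulary file is required.
    Tiktoken model = new Tiktoken("vocab.tiktoken", specialTokensEncoder: null);

    // Illustrative split pattern only, not the actual GPT-4 pre-tokenization regex.
    TikTokenPreTokenizer preTokenizer = new TikTokenPreTokenizer(
        regexPattern: @"\p{L}+|\p{N}+|[^\s\p{L}\p{N}]+|\s+",
        specialTokensEncoder: null);

    // Assumed constructor: Tokenizer(Model model, PreTokenizer? preTokenizer = null, Normalizer? normalizer = null).
    Tokenizer tokenizer = new Tokenizer(model, preTokenizer);
    IReadOnlyList<int> ids = tokenizer.EncodeToIds("Hello World");

CreateByModelNameAsync presumably performs the equivalent wiring internally for known model names, selecting the right BPE data and split pattern.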

ghost assigned tarekgh on Feb 1, 2024
tarekgh requested a review from michaelgsharp on February 1, 2024 at 22:08
codecov bot commented Feb 1, 2024

Codecov Report

Attention: 210 lines in your changes are missing coverage. Please review.

Comparison is base (902102e) 68.80% compared to head (35e2cbc) 68.81%.

❗ Current head 35e2cbc differs from pull request most recent head 4cd96b3. Consider uploading reports for the commit 4cd96b3 to get more accurate results.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6981      +/-   ##
==========================================
+ Coverage   68.80%   68.81%   +0.01%     
==========================================
  Files        1249     1256       +7     
  Lines      249686   250425     +739     
  Branches    25485    25569      +84     
==========================================
+ Hits       171795   172335     +540     
- Misses      71294    71466     +172     
- Partials     6597     6624      +27     
Flag Coverage Δ
Debug 68.81% <72.62%> (+0.01%) ⬆️
production 63.28% <66.87%> (+0.01%) ⬆️
test 88.44% <100.00%> (+0.02%) ⬆️

Flags with carried forward coverage won't be shown.

Files Coverage Δ
...Microsoft.ML.Tokenizers/PreTokenizer/Whitespace.cs 100.00% <100.00%> (ø)
src/Microsoft.ML.Tokenizers/TokenizerResult.cs 100.00% <100.00%> (+9.09%) ⬆️
...Microsoft.ML.Tokenizers.Tests/PreTokenizerTests.cs 95.31% <100.00%> (ø)
test/Microsoft.ML.Tokenizers.Tests/TitokenTests.cs 100.00% <100.00%> (ø)
...rc/Microsoft.ML.Tokenizers/PreTokenizer/Roberta.cs 57.14% <33.33%> (-19.79%) ⬇️
...c/Microsoft.ML.Tokenizers/Utils/BytePairEncoder.cs 95.23% <95.23%> (ø)
...crosoft.ML.Tokenizers/PreTokenizer/PreTokenizer.cs 83.33% <81.48%> (-7.58%) ⬇️
...Microsoft.ML.Tokenizers/Utils/ByteArrayComparer.cs 65.38% <65.38%> (ø)
src/Microsoft.ML.Tokenizers/Model/Model.cs 7.69% <7.69%> (ø)
src/Microsoft.ML.Tokenizers/Utils/LruCache.cs 66.66% <66.66%> (ø)
... and 3 more

... and 3 files with indirect coverage changes

return true;
}

int[] encodedIds = BytePairEncoder.BytePairEncode(Encoding.UTF8.GetBytes(sequence), _encoder);
Member: It'd be really nice to reduce the overheads here. It can be done separately, but this is a lot of allocation.

tarekgh (Member, Author): Tracked through issue #6989.
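
One possible direction for trimming that allocation (a sketch, not the PR's code): rent a pooled buffer for the UTF-8 bytes instead of allocating a fresh array per call. This assumes BytePairEncode could gain an overload taking an ArraySegment<byte> or span; the helper name below is hypothetical.

    using System;
    using System.Buffers;
    using System.Text;

    static class Utf8Scratch
    {
        // Encodes 'sequence' into a rented buffer and hands the written segment to 'consume',
        // returning the buffer to the pool afterwards. Illustrative helper, not from the PR.
        public static void WithUtf8Bytes(string sequence, Action<ArraySegment<byte>> consume)
        {
            byte[] buffer = ArrayPool<byte>.Shared.Rent(Encoding.UTF8.GetMaxByteCount(sequence.Length));
            try
            {
                int written = Encoding.UTF8.GetBytes(sequence, 0, sequence.Length, buffer, 0);
                consume(new ArraySegment<byte>(buffer, 0, written));
            }
            finally
            {
                ArrayPool<byte>.Shared.Return(buffer);
            }
        }
    }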

}
}

return utf8Bytes.Count > 0 ? Encoding.UTF8.GetString(utf8Bytes.ToArray()) : string.Empty;
Member: Do we only target netstandard2.0, or do we multitarget and build this for netcoreapp as well? There are newer APIs that make this cheaper.

tarekgh (Member, Author): Tracked through issue #6989.
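
For reference, the sort of newer API the question alludes to: on .NET 5+, the List&lt;byte&gt;'s backing storage can be read as a span and decoded without the ToArray() copy. A sketch under the assumption that the project multitargets; the helper is hypothetical:

    #if NET5_0_OR_GREATER
    using System.Collections.Generic;
    using System.Runtime.InteropServices;
    using System.Text;

    static class Utf8Decode
    {
        // Avoids the intermediate byte[] that utf8Bytes.ToArray() allocates on the netstandard2.0 path.
        public static string FromList(List<byte> utf8Bytes) =>
            utf8Bytes.Count > 0
                ? Encoding.UTF8.GetString(CollectionsMarshal.AsSpan(utf8Bytes))
                : string.Empty;
    }
    #endif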

return outList;
}

private static T[] Slice<T>(this T[] array, int start, int end)
Member: There looks to be a fair amount of allocation being incurred from all this slicing. Can that be reduced?

tarekgh (Member, Author): I tracked this in issue #6989.
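
Illustrative only: where callers merely read the sub-range, a span view sidesteps the per-slice array copy altogether. The helper below is hypothetical and mirrors the Slice signature above, assuming 'end' is exclusive:

    using System;

    static class SliceViews
    {
        // Returns a view over the existing array instead of copying into a new one.
        public static ReadOnlySpan<T> SliceView<T>(this T[] array, int start, int end)
            => array.AsSpan(start, end - start);
    }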

michaelgsharp (Contributor) commented:

LGTM. Just that one question about the empty dispose (though I saw it in a couple of other places too; my question applies there as well).

tarekgh merged commit 6f55525 into dotnet:main on Feb 6, 2024
github-actions bot locked and limited conversation to collaborators on Mar 8, 2024