- Seattle, WA
- csteegz.com
Stars
The easiest way to serve AI apps and models: build model inference APIs, job queues, LLM apps, multi-model pipelines, and more!
A modernized, complete, self-contained TeX/LaTeX engine, powered by XeTeX and TeXLive.
Visually explore, understand, and present your data.
A flexible, high-performance serving system for machine learning models
The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Command line parsing, invocation, and rendering of terminal output.
VS2022 Add-in. Click on any method or class to see what .NET Core's JIT generates for them (ASM).
Local Forwarder is an agent that collects Application Insights or OpenCensus telemetry from a variety of SDKs and routes it to the Application Insights backend.
A purpose-built proxy for the Linkerd service mesh. Written in Rust.
Fast persistent recoverable log and key-value store + cache, in C# and C++.
User interface for recording and managing ETW traces
Easily deploy models to FPGAs for ultra-low latency with Azure Machine Learning powered by Project Brainwave
Accelerate your web app development | Build fast. Run fast.
Microsoft Machine Learning Server Excel Add-in
A native functional ASP.NET Core web framework for F# developers.
Benchmarks of popular .NET web frameworks.