Stars
Hand-curated collection of course resources from 120+ Chinese universities; additions and corrections welcome. https://studyhard.cf/
OCR software, free and offline. Open-source, free offline OCR software. Supports screenshot capture and batch image import, PDF document recognition, exclusion of watermarks and headers/footers, and QR code scanning/generation. Ships with recognition libraries for multiple languages.
No fortress, purely open ground. OpenManus is coming.
QUANTAXIS: a purely local quantitative-trading solution for stock/futures/options data, backtesting, simulation, trading, visualization, and multi-account management, with support for task scheduling and distributed deployment.
AKShare is an elegant and simple financial data interface library for Python, built for human beings! An open-source financial data interface library.
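A minimal sketch of the interface style; `stock_zh_a_spot_em` (real-time A-share quotes) is one of AKShare's many data functions, and names like this can change between releases.

```python
import akshare as ak

# Fetch real-time A-share spot quotes as a pandas DataFrame.
df = ak.stock_zh_a_spot_em()
print(df.head())
```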
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with …
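The "single line" refers to pointing an OpenAI-compatible client at Xinference instead of OpenAI. A minimal sketch, assuming a local Xinference server on its default port 9997 already serving a model; the model name below is an assumption, not part of the description above.

```python
from openai import OpenAI

# Only base_url changes relative to stock OpenAI usage; the key is unused locally.
client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")
resp = client.chat.completions.create(
    model="qwen2-instruct",  # hypothetical name of a model launched in Xinference
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```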
Voice Recognition to Text Tool / a locally running, offline audio/video-to-subtitle tool that outputs JSON, SRT subtitles, or plain text.
A Flexible Framework for Experiencing Cutting-edge LLM Inference Optimizations
A robust, efficient, low-latency speech-to-text library with advanced voice activity detection, wake word activation and instant transcription.
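A minimal sketch following the usage pattern in the RealtimeSTT README: the recorder captures the default microphone, applies voice activity detection, and `text()` blocks until an utterance has been transcribed.

```python
from RealtimeSTT import AudioToTextRecorder

if __name__ == "__main__":
    recorder = AudioToTextRecorder()
    while True:
        # Blocks until VAD detects end of speech, then returns the transcript.
        print(recorder.text())
```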
Universal online file preview project based on Spring Boot.
A simple screen-parsing tool for a pure-vision-based GUI agent.
🔥 Turn entire websites into LLM-ready markdown or structured data. Scrape, crawl and extract with a single API.
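A minimal sketch against the hosted v1 scrape endpoint; the exact request and response fields here are assumptions based on the "single API" description, so verify them against the current API reference.

```python
import requests

resp = requests.post(
    "https://api.firecrawl.dev/v1/scrape",
    headers={"Authorization": "Bearer fc-YOUR_API_KEY"},  # placeholder key
    json={"url": "https://example.com", "formats": ["markdown"]},
)
# Assumed response shape: {"success": true, "data": {"markdown": "..."}}
print(resp.json()["data"]["markdown"])
```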
Speech-to-text, text-to-speech, speaker diarization, speech enhancement, and VAD using next-gen Kaldi with onnxruntime without Internet connection. Support embedded systems, Android, iOS, HarmonyOS…
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
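A minimal LoRA sketch with PEFT: wrap a base model so only low-rank adapter weights are trained. `gpt2` and its `c_attn` projection are just a small illustrative target, not anything the description prescribes.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # only the adapter weights are trainable
```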
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Pseudo Streaming SenseVoice with Hotwords
A small project that implements VAD + a voiceprint lock + SenseVoice for near-real-time speech transcription.
API and WebSocket server for SenseVoice. It inherits enhanced features such as VAD, real-time streaming recognition, and speaker verification.
Multilingual large voice generation model, providing full-stack inference, training, and deployment capabilities.
Multilingual Voice Understanding Model
We introduce EfficientRAG, an efficient retriever for multi-hop question answering. EfficientRAG iteratively generates new queries without the need for LLM calls at each iteration and filters out i…
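A rough sketch of the iterative loop the description implies; `retrieve`, `label_and_filter`, and `build_next_query` are hypothetical stand-ins for the paper's retriever, lightweight token labeler, and query constructor, not the actual EfficientRAG API.

```python
def efficient_rag(question, retrieve, label_and_filter, build_next_query, max_hops=3):
    query, evidence = question, []
    for _ in range(max_hops):
        chunks = retrieve(query)
        # A small trained labeler marks useful chunks and signals termination,
        # so no LLM call is needed inside the loop.
        useful, done = label_and_filter(query, chunks)
        evidence.extend(useful)
        if done:
            break
        query = build_next_query(query, useful)
    return evidence
```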
OmniControl: Control Any Joint at Any Time for Human Motion Generation, ICLR 2024