update blogs (#3261)
jingxu10 authored Sep 17, 2024
1 parent 3703754 commit f4bb0c7
Showing 1 changed file with 7 additions and 1 deletion.
8 changes: 7 additions & 1 deletion blogs.html
@@ -48,11 +48,17 @@
<section id="blogs-publications">
<h1>Blogs &amp; Publications<a class="headerlink" href="#blogs-publications" title="Link to this heading"></a></h1>
<ul class="simple">
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/optimize-stable-diffusion-upscaling-with-pytorch.html">Optimize Stable Diffusion Upscaling with Diffusers and PyTorch*, Sep 2024</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/intel-ai-solutions-support-meta-llama-3-1-launch.html">Intel AI Solutions Boost LLMs: Unleashing the Power of Meta* Llama 3.1, Jul 2024</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/intel-ai-solutions-accelerate-alibaba-qwen2-llms.html">Optimization of Intel® AI Solutions for Alibaba Cloud* Qwen2 Large Language Models, Jun 2024</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-meta-llama3-with-intel-ai-solutions.html">Accelerate Meta* Llama 3 with Intel® AI Solutions, Apr 2024</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/optimize-text-and-image-generation-using-pytorch.html">Optimize Text and Image Generation Using PyTorch*, Feb 2024</a></p></li>
<li><p><a class="reference external" href="https://pytorch.org/blog/ml-model-server-resource-saving/">ML Model Server Resource Saving - Transition From High-Cost GPUs to Intel CPUs and oneAPI powered Software with performance, Oct 2023</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/news/llama2.html">Accelerate Llama 2 with Intel AI Hardware and Software Optimizations, Jul 2023</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-training-inference-on-amx.html">Accelerate PyTorch* Training and Inference Performance using Intel® AMX, Jul 2023</a></p></li>
<li><p><a class="reference external" href="https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-intel-dl-boost-improve-inference-performance-of-hugging-face-bert-base-model-in-google-cloud-platform-gcp-technology-guide">Intel® Deep Learning Boost (Intel® DL Boost) - Improve Inference Performance of Hugging Face BERT Base Model in Google Cloud Platform (GCP) Technology Guide, Apr 2023</a></p></li>
<li><p><a class="reference external" href="https://www.youtube.com/watch?v=Id-rE2Q7xZ0&amp;t=1s">Get Started with Intel® Extension for PyTorch* on GPU | Intel Software, Mar 2023</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-int8-inf-with-new-x86-backend.html">Accelerate PyTorch* INT8 Inference with New X86 Quantization Backend on X86 CPUs, Mar 2023</a></p></li>
<li><p><a class="reference external" href="https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-int8-inf-with-new-x86-backend.html">Accelerate PyTorch* INT8 Inference with New "X86" Quantization Backend on X86 CPUs, Mar 2023</a></p></li>
<li><p><a class="reference external" href="https://huggingface.co/blog/intel-sapphire-rapids">Accelerating PyTorch Transformers with Intel Sapphire Rapids, Part 1, Jan 2023</a></p></li>
<li><p><a class="reference external" href="https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-improve-inference-performance-of-bert-base-model-from-hugging-face-for-network-security-technology-guide">Intel® Deep Learning Boost - Improve Inference Performance of BERT Base Model from Hugging Face for Network Security Technology Guide, Jan 2023</a></p></li>
<li><p><a class="reference external" href="https://www.youtube.com/watch?v=066_Jd6cwZg">Scaling inference on CPUs with TorchServe, PyTorch Conference, Dec 2022</a></p></li>