Update README
aliemo committed Jun 22, 2023
1 parent 4a605f8 commit e1dd686
Showing 2 changed files with 6 additions and 4 deletions.
5 changes: 3 additions & 2 deletions README.md
@@ -2,6 +2,7 @@
> **Research and Materials on Hardware implementation of BERT (Bidirectional Encoder Representations from Transformers) Model**
<center><img src="https://img.shields.io/badge/Status-WIP-ff69b4?style=flat-square"/></center>

<center><img src="https://img.shields.io/badge/Progress-%2599-ef6c00?labelColor=1565c0&style=flat-square"/></center>

<p align="center">
@@ -12,9 +13,9 @@

## BERT Model

* BERT is a method of **pre-training language representations**, meaning that we **train a general-purpose *language understanding model*** on a large text corpus (like Wikipedia) and then use that model for downstream NLP tasks.**
* BERT is a method of **pre-training language representations**, meaning that we **train a general-purpose *language understanding model*** on a large text corpus (like Wikipedia) and then use that model for downstream NLP tasks.

* BERT was created and **published in 2018 by Jacob Devlin and his colleagues from Google**. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks.**
* BERT was created and **published in 2018 by Jacob Devlin and his colleagues from Google**. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks.

<p align="center">
<img src="./data/img/BERT-ARCH.png" />
5 changes: 3 additions & 2 deletions data/header.txt
@@ -2,6 +2,7 @@
> **Research and Materials on Hardware implementation of BERT (Bidirectional Encoder Representations from Transformers) Model**

<center><img src="https://img.shields.io/badge/Status-WIP-ff69b4?style=flat-square"/></center>

<center><img src="https://img.shields.io/badge/Progress-%2599-ef6c00?labelColor=1565c0&style=flat-square"/></center>

<p align="center">
@@ -12,9 +13,9 @@

## BERT Model

* BERT is a method of **pre-training language representations**, meaning that we **train a general-purpose *language understanding model*** on a large text corpus (like Wikipedia) and then use that model for downstream NLP tasks.**
* BERT is a method of **pre-training language representations**, meaning that we **train a general-purpose *language understanding model*** on a large text corpus (like Wikipedia) and then use that model for downstream NLP tasks.

* BERT was created and **published in 2018 by Jacob Devlin and his colleagues from Google**. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks.**
* BERT was created and **published in 2018 by Jacob Devlin and his colleagues from Google**. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks.

<p align="center">
<img src="./data/img/BERT-ARCH.png" />