
video-summarisation

What are Transformer Models?

Transformers are a type of artificial neural network architecture used in deep learning to solve the problem of transduction: the transformation of input sequences into output sequences.

You may be wondering what we mean by input and output sequences. Machine translation is a classic example: the input is a sentence in one language, and the output is its translation in another.

So the input sentence "Optimus Prime is a cool robot" would require us to produce the Japanese output "コンボイはかっこいいロボットです".

One of the main challenges in sequence transduction is learning representations for both the input and output sequences in a robust manner, so that no distortions are introduced. You do not want to mistranslate an important message.

One approach to this challenge is the use of recurrent neural networks (RNNs). Unlike feed-forward neural networks, where inputs and outputs are treated as independent of each other, the output of an RNN depends on the prior elements of a given sequence, as in the sketch below.
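Here is a minimal sketch of a single recurrent step, assuming a vanilla (Elman) RNN cell; the names and sizes are illustrative and not taken from this repository:

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4
W_xh = rng.normal(size=(hidden_size, input_size)) * 0.1   # input -> hidden weights
W_hh = rng.normal(size=(hidden_size, hidden_size)) * 0.1  # hidden -> hidden weights
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # The new hidden state mixes the current input with the previous state,
    # so it carries information from every earlier element of the sequence.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(hidden_size)
sequence = rng.normal(size=(5, input_size))  # a toy sequence of 5 input vectors
for x_t in sequence:
    h = rnn_step(x_t, h)  # h now summarizes the whole prefix seen so far
```

Because each state feeds into the next, information from early tokens can influence the output at the end of the sequence, which is exactly what a feed-forward network lacks.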

While attention, which lets the decoder focus on the most relevant parts of the encoded input at each step, is an important trick up the sleeve of the decoder, we may ask whether the encoder has any tricks of its own. The answer is yes, and it is called self-attention.

For instance, take the following input sentence:

"Bumblebee plays some catchy music and dances along to it".

What is the PEGASUS model?

Given long documents to read, our natural preference is not to read them at all, or at least to scan just the main points. So having a summary is always great for saving time ⏳ and brain processing power.

However, automatic summarization used to be a near-impossible task, and abstractive summarization in particular is very challenging.

On a high level, PEGASUS uses an encoder-decoder model for sequence-to-sequence learning. In such a model, the encoder first takes into consideration the context of the whole input text and encodes it into a context vector, which is basically a numerical representation of the input text. This numerical representation is then fed to the decoder, whose job is to decode the context vector and produce the summary.
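As a minimal sketch of running PEGASUS this way with the Hugging Face transformers library; google/pegasus-xsum is one publicly released checkpoint, and this repository may well use a different one:

```python
# Requires: pip install transformers torch sentencepiece
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_name = "google/pegasus-xsum"  # assumption: any PEGASUS checkpoint works here
tokenizer = PegasusTokenizer.from_pretrained(model_name)
model = PegasusForConditionalGeneration.from_pretrained(model_name)

text = (
    "PEGASUS is a Transformer encoder-decoder model pre-trained with a "
    "gap-sentence generation objective and fine-tuned for abstractive "
    "summarization of long documents."
)
# Encoder side: tokenize and encode the whole input text into numbers.
inputs = tokenizer(text, truncation=True, padding="longest", return_tensors="pt")
# Decoder side: generate the summary token by token with beam search.
summary_ids = model.generate(**inputs, num_beams=4, max_length=64)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```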


ABSTRACTIVE

Abstractive summarization here uses the PEGASUS model by Google, which is built on the Transformer encoder-decoder architecture. During pre-training, whole sentences are masked out of the input document; the encoder reads the masked document, and the decoder learns to generate the missing "gap sentences". This objective is called Gap Sentence Generation (GSG), and it closely mirrors the summarization task itself, as the toy sketch below shows.
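A toy sketch of gap-sentence masking; the mask string stands in for the [MASK1] token from the PEGASUS paper, and the gap sentence is hand-picked here rather than scored by importance as in the real objective:

```python
MASK = "<mask>"  # stand-in for the paper's [MASK1] sentence-mask token

def gsg_mask(sentences, gap_indices):
    # Replace the chosen "gap sentences" with a mask token; the decoder's
    # training target is to regenerate exactly those sentences.
    masked = [MASK if i in gap_indices else s for i, s in enumerate(sentences)]
    targets = [sentences[i] for i in sorted(gap_indices)]
    return " ".join(masked), " ".join(targets)

doc = [
    "Transformers are sentient robots.",
    "Optimus Prime leads the Autobots.",
    "They defend Earth from the Decepticons.",
]
source, target = gsg_mask(doc, gap_indices={1})
# source: "Transformers are sentient robots. <mask> They defend Earth from the Decepticons."
# target: "Optimus Prime leads the Autobots."
```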

Abstractive summarization aims to take a body of text and turn it into a shorter version. Not only does it shorten the text, it also generates new sentences rather than merely extracting existing ones.
