Text Generation

Language models have proved very useful for analyzing texts via contextualized embeddings. However, there are other possible applications.

In this talk, we will explore various mechanisms of text generation with transformer models. We will use publicly available models and show how the results differ depending on the training data used. We will look at GPT-2 from OpenAI, which has been openly available for quite some time. The more powerful GPT-3 is unfortunately not open, but similar results can be achieved with the free GPT-Neo from EleutherAI.
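The talk does not prescribe a particular toolkit; as a rough illustration only, a minimal sketch using the Hugging Face transformers pipeline (the model identifiers gpt2 and EleutherAI/gpt-neo-125M are assumptions, not taken from the abstract) could look like this:

```python
# Minimal text-generation sketch with Hugging Face transformers.
# Assumes: pip install transformers torch
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output reproducible

# GPT-2 (OpenAI) -- openly available for quite some time.
gpt2 = pipeline("text-generation", model="gpt2")

# GPT-Neo (EleutherAI) -- a freely available alternative to GPT-3.
gpt_neo = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")

prompt = "Transformer models can be used to"
for name, generator in [("GPT-2", gpt2), ("GPT-Neo", gpt_neo)]:
    out = generator(prompt, max_length=40, do_sample=True, num_return_sequences=1)
    print(f"{name}: {out[0]['generated_text']}")
```

Running the same prompt through both models gives a feel for how the underlying training data shapes what each model generates.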

Finally, we will apply the same models to automatic translation and to detecting similarities between sentences in different languages.
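The abstract does not specify how sentence similarity is computed. One common approach, shown here purely as an assumed illustration (the English/German example sentences are made up), is to mean-pool a model's hidden states into sentence vectors and compare them with cosine similarity:

```python
# Sketch: crude sentence embeddings from mean-pooled GPT-2 hidden states,
# compared with cosine similarity. Not the speaker's code.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2")

def embed(sentence: str) -> torch.Tensor:
    # Tokenize and average the last hidden state into a single vector.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

a = embed("The weather is nice today.")
b = embed("Das Wetter ist heute schön.")
similarity = torch.nn.functional.cosine_similarity(a, b, dim=0)
print(f"cosine similarity: {similarity.item():.3f}")
```

GPT-2 was trained largely on English text, so a multilingual model would normally be a better fit for cross-lingual comparison, but the mechanics are the same.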

Speaker


Sidhart Ramachandran
Sidhart Ramachandran leads a team of data scientists building data products that help businesses and customers. He has over 10 years of experience in software engineering and data science across telecom, banking, and marketing.
