Welcome to bert-embedding’s documentation!

BERT, published by Google, is a new way to obtain pre-trained language model word representations. Many NLP tasks benefit from BERT to achieve state-of-the-art (SOTA) results.

The goal of this project is to obtain sentence and token embeddings from BERT’s pre-trained model. In this way, instead of building and fine-tuning an end-to-end NLP model, you can build your model by simply utilizing the sentence or token embeddings.
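As a minimal sketch of this workflow (the exact constructor arguments and return format are covered in the API Reference below; the defaults and shapes shown here are assumptions for illustration):

    from bert_embedding import BertEmbedding

    # Illustrative sketch: load a pre-trained BERT model and extract
    # token embeddings for a list of sentences. The default model and
    # dataset are assumptions; see the API Reference for the actual options.
    sentences = ["BERT provides contextual token representations."]

    bert_embedding = BertEmbedding()
    results = bert_embedding(sentences)

    # Each result is expected to pair the tokens of one sentence with
    # their corresponding embedding vectors.
    tokens, token_embeddings = results[0]
    print(tokens)
    print(len(token_embeddings))  # one embedding vector per token

The returned embeddings can then be fed directly into a downstream model (e.g. a classifier), avoiding the cost of fine-tuning BERT end-to-end.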

This project is implemented with @MXNet. Special thanks to the @gluon-nlp team.

API Reference

Indices and tables