Neural Machine Translation by Jointly Learning to Align and Translate

Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio
2014

Abstract

Neural machine translation is a recently proposed approach to machine translation. Unlike traditional statistical machine translation, neural machine translation aims at building a single neural network that can be jointly tuned to maximize translation performance. The models proposed recently for neural machine translation often belong to a family of encoder-decoders and consist of an encoder that encodes a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder-decoder architecture, and propose extending it by allowing a model to automatically (soft-)search for parts of a source sentence that are relevant to predicting a target word, without having to form these parts as a hard segment explicitly. With this new approach, we achieve a translation performance comparable to the existing state-of-the-art phrase-based system on the task of English-to-French translation. Furthermore, qualitative analysis reveals that the (soft-)alignments found by the model agree well with our intuition.
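
To make the abstract's soft-search concrete, the sketch below implements the additive alignment model the paper describes: a score e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j) is computed for each source annotation h_j, softmax-normalized into attention weights, and used to form a context vector for the decoder. All dimensions, weight values, and variable names here are illustrative assumptions (following the paper's notation), not an excerpt of any reference implementation.

import numpy as np

rng = np.random.default_rng(0)

T_x, n, m = 5, 8, 8                      # source length, encoder dim, decoder state dim
h = rng.standard_normal((T_x, 2 * n))    # bidirectional encoder annotations h_1..h_Tx
s_prev = rng.standard_normal(m)          # previous decoder hidden state s_{i-1}

d = 10                                   # alignment-model hidden size
W_a = rng.standard_normal((d, m))
U_a = rng.standard_normal((d, 2 * n))
v_a = rng.standard_normal(d)

# Alignment scores e_ij = v_a^T tanh(W_a s_{i-1} + U_a h_j)
e = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h_j) for h_j in h])

# Softmax over source positions gives the soft alignment a_ij,
# and the context vector c_i is the weighted sum of annotations.
a = np.exp(e - e.max())
a /= a.sum()
c = a @ h                                # shape (2n,), consumed by the decoder

print(a)        # soft alignment over the T_x source positions (sums to 1)
print(c.shape)  # (16,)

Because the weights a_ij are differentiable, the whole network, alignment model included, can be trained jointly by backpropagation, which is the "jointly learning to align and translate" of the title.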

Code References (2 repositories, 3 references)

microsoft/onnxruntime (1 file)

onnxruntime/contrib_ops/cpu/attnlstm/bahdanau_attention.h (1 match):
// Please refer to: https://arxiv.org/pdf/1409.0473.pdf
onnx/onnx (1 file)

docs/proposals/NLPinONNXproposal.md (2 matches):
We should work toward being able to represent major classes of NLP model architectures. One example of such an architecture is the seq2seq with attention model (e.g. https://arxiv.org/abs/1409.0473). This architecture is used for many use cases, including neural machine translation, speech processing, summarization, dialog systems, image captioning, and syntax parsing, among many others. At the same time, seq2seq with attention is sufficiently complex that supporting it will push forward the state of the art in ONNX, but not so complex that we'd need to define a full programming language.
[1] https://arxiv.org/abs/1409.0473