.. _sphx_glr_tutorials_models_4_old_wines:

.. _tutorials4-index:

Revisit classic models from a graph perspective
-------------------------------------------------------

* **Capsule** `[paper] <https://arxiv.org/abs/1710.09829>`__
  `[tutorial] <4_old_wines/2_capsule.html>`__
  `[PyTorch code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/capsule>`__:
  This computer vision model has two key ideas. First, it enriches the feature
  representation from a scalar to a vector, called a *capsule*. Second, it
  replaces max-pooling with dynamic routing, which integrates a lower-level
  capsule into one or several higher-level capsules via non-parametric message
  passing. The tutorial shows how the routing step can be implemented with DGL
  APIs; a minimal sketch appears after this list.

* **Transformer** `[paper] <https://arxiv.org/abs/1706.03762>`__
  `[tutorial] <4_old_wines/7_transformer.html>`__
  `[PyTorch code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer>`__
  and **Universal Transformer** `[paper] <https://arxiv.org/abs/1807.03819>`__
  `[tutorial] <4_old_wines/7_transformer.html>`__
  `[PyTorch code] <https://github.com/dmlc/dgl/tree/master/examples/pytorch/transformer>`__:
  These two models replace recurrent neural networks (RNNs) with several layers
  of multi-head attention to encode and discover structure among the tokens of
  a sentence. The attention mechanism can likewise be formulated as a graph
  operation with message passing; see the second sketch after this list.
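To make the dynamic-routing step concrete, here is a minimal sketch of one
routing pass written with DGL's message-passing primitives. It is illustrative
rather than the tutorial's exact code: the per-edge prediction vectors
``u_hat`` are assumed to be precomputed, and the ``routing`` and ``squash``
helpers, feature names, and graph layout are all hypothetical.

.. code-block:: python

    import torch
    import dgl
    import dgl.function as fn
    from dgl.nn.functional import edge_softmax

    def squash(s, dim=-1):
        # Squash non-linearity: shrink short vectors toward zero,
        # push long vectors toward unit length.
        sq = (s ** 2).sum(dim, keepdim=True)
        return (sq / (1.0 + sq)) * s / torch.sqrt(sq + 1e-8)

    def routing(g, u_hat, num_iters=3):
        # g: edges run from lower-level to higher-level capsule nodes.
        # u_hat: (num_edges, dim) prediction vectors, one per edge.
        g = g.local_var()
        g.edata['u_hat'] = u_hat
        g.edata['b'] = torch.zeros(g.num_edges(), 1)  # routing logits b_ij
        for _ in range(num_iters):
            # c_ij: softmax over each lower capsule's outgoing edges.
            g.edata['c'] = edge_softmax(g, g.edata['b'], norm_by='src')
            # s_j = sum_i c_ij * u_hat_ij, accumulated on higher capsules.
            g.edata['m'] = g.edata['c'] * g.edata['u_hat']
            g.update_all(fn.copy_e('m', 'm'), fn.sum('m', 's'))
            g.ndata['v'] = squash(g.ndata['s'])
            # Agreement update: b_ij += u_hat_ij . v_j
            g.apply_edges(fn.e_dot_v('u_hat', 'v', 'agree'))
            g.edata['b'] = g.edata['b'] + g.edata['agree']
        return g.ndata['v']

    # Example: 8 lower capsules fully connected to 4 higher capsules.
    L, H, d = 8, 4, 16
    src = torch.arange(L).repeat_interleave(H)
    dst = L + torch.arange(H).repeat(L)
    g = dgl.graph((src, dst), num_nodes=L + H)
    v = routing(g, torch.randn(g.num_edges(), d))  # rows L..L+H-1 are outputs

Note that the loop contains no learned parameters, which is what makes the
routing non-parametric: only the prediction vectors come from trained weights.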
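Likewise, single-head scaled dot-product attention can be phrased as message
passing over a token graph: each edge carries a score from a key token to a
query token, scores are softmax-normalized per query, and values are summed
along incoming edges. The sketch below is again illustrative; the
``graph_attention`` helper and its names are hypothetical, and multi-head
attention would add a head dimension to the features.

.. code-block:: python

    import math
    import torch
    import dgl
    import dgl.function as fn
    from dgl.nn.functional import edge_softmax

    def graph_attention(g, q, k, v):
        # g: token graph; an edge i -> j lets token j attend to token i.
        # q, k, v: (num_tokens, dim) query/key/value projections.
        g = g.local_var()
        g.ndata.update({'q': q, 'k': k, 'v': v})
        # Per-edge score (k_i . q_j) / sqrt(dim): keys live on source
        # nodes, queries on destination nodes.
        g.apply_edges(fn.u_dot_v('k', 'q', 'score'))
        g.edata['a'] = edge_softmax(g, g.edata['score'] / math.sqrt(q.shape[-1]))
        # out_j = sum_i a_ij * v_i: attention-weighted sum of source values.
        g.update_all(fn.u_mul_e('v', 'a', 'm'), fn.sum('m', 'out'))
        return g.ndata['out']

    # Example: full self-attention over 5 tokens on a complete graph
    # (self-loops included).
    n, d = 5, 64
    g = dgl.graph((torch.arange(n).repeat_interleave(n), torch.arange(n).repeat(n)))
    out = graph_attention(g, torch.randn(n, d), torch.randn(n, d), torch.randn(n, d))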
.. raw:: html

    <div class="sphx-glr-thumbnails">

.. thumbnail-parent-div-open

.. raw:: html

    <div class="sphx-glr-thumbcontainer">
.. only:: html

  .. image:: /tutorials/models/4_old_wines/images/thumb/sphx_glr_2_capsule_thumb.png
    :alt:

  :ref:`sphx_glr_tutorials_models_4_old_wines_2_capsule.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Capsule Network</div>
    </div>

.. raw:: html

    <div class="sphx-glr-thumbcontainer">
.. only:: html

  .. image:: /tutorials/models/4_old_wines/images/thumb/sphx_glr_7_transformer_thumb.png
    :alt:

  :ref:`sphx_glr_tutorials_models_4_old_wines_7_transformer.py`

.. raw:: html

      <div class="sphx-glr-thumbnail-title">Transformer as a Graph Neural Network</div>
    </div>

.. thumbnail-parent-div-close

.. raw:: html

    </div>
.. toctree::
   :hidden:

   /tutorials/models/4_old_wines/2_capsule
   /tutorials/models/4_old_wines/7_transformer