Come and join a fellowship of Deep Learning and AI enthusiasts in their quest to keep up with the latest developments in artificial intelligence and cloud computing!
This group is for intermediate and advanced practitioners of deep learning using TensorFlow. Our plan is to organize a weekly or bi-weekly event in which we collectively analyze the latest and seminal research papers in the field of AI. We will cover anything from computer vision and natural language processing to deep reinforcement learning and Bayesian networks. These meetups are intended to be an open discussion of the selected paper, related research, and relevant model coding.
At the next meetup we will discuss the Transformer model, introduced in the paper:
Attention is All You Need (https://arxiv.org/abs/1706.03762)
Time to say goodbye to RNNs and LSTMs! Even though LSTMs and GRU derivatives have been prominent in NLP models for the past 5 years, their recurrent nature makes them susceptible to exploding and vanishing gradients on long sequences, and difficult to parallelize for cloud-scale training. In this paper, the researchers completely discarded the conventional recurrent and convolutional architectures and built a state-of-the-art sequence model based solely on attention mechanisms.
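To give a flavor of what we'll be discussing: the core building block of the paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. Here is a minimal NumPy sketch of that equation (the meetup code itself will be in TensorFlow/Keras; the matrix shapes below are just illustrative):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V, the attention equation from the paper."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # (seq_q, seq_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # row-wise softmax
    return weights @ V, weights

# Toy example: 3 query positions attending over 4 key/value positions, dim 8
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
output, weights = scaled_dot_product_attention(Q, K, V)
```

Note that every output position is computed independently of the others, which is exactly why this architecture parallelizes so much better than a recurrent model.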
Please read the paper beforehand and try to code it up yourself. We will be covering the code in detail at the meetup, so we ask that everyone code in TensorFlow and/or Keras to ensure we are all speaking the same language in our discussion.
The goal of this meetup is to create an interactive study group where we collectively choose the research papers we'd like to dig into and share the lessons we learn from implementing the architectures in our own applications. Your feedback will be valuable in determining the structure and content of future events.