This talk covers the basics of building neural networks for software engineers: weights and biases, activation functions, supervised learning, and gradient descent. I'll share tips and best practices for effective training, such as learning-rate decay, regularization, and the subtleties of overfitting. Dense and convolutional neural networks are key to any modern implementation. The session starts with low-level TensorFlow and also includes a sample of high-level TensorFlow code using layers and datasets.
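As a taste of the training techniques mentioned above, here is a minimal sketch (not taken from the talk) of gradient descent with exponential learning-rate decay, applied to a one-dimensional quadratic loss; the loss, starting point, and decay factor are illustrative choices, not the talk's actual example:

```python
# Gradient descent with exponential learning-rate decay on the
# toy loss L(w) = (w - 3)^2, whose minimum is at w = 3.
def train(w=0.0, lr=0.3, decay=0.9, steps=50):
    for _ in range(steps):
        grad = 2 * (w - 3.0)  # dL/dw
        w -= lr * grad        # gradient descent update
        lr *= decay           # shrink the learning rate each step
    return w

w_final = train()  # approaches the minimum at w = 3
```

Decaying the learning rate lets early steps move quickly toward the minimum while later steps settle without overshooting; TensorFlow exposes the same idea through its learning-rate schedule utilities.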
Wanlin Li (Quebec) holds a Ph.D. in genetics and crop breeding, a master's degree in biochemistry and molecular biology, and a DESS in bioinformatics. Her dream of becoming a node in the interdisciplinary network brought her to computer science: she is currently pursuing a master's degree in computer science with a focus on bioinformatics at the University of Sherbrooke, under the supervision of Nadia Tahiri. Through her training and previous experience, she has a solid knowledge of machine learning algorithms, phylogeography, and next-generation sequencing (NGS) workflows.
University of Sherbrooke