VENUE: GWB Tech Talk Room @Google NYC
Enter at 9th Ave & 16th St
Please note that this meetup happens in the main Google NYC building. This means we need to provide building security with first/last names of attendees.
AGENDA:
5:00 pm Registration Opens
If you RSVP for this event with an incomplete name or alias, you will be moved to the wait list until that is corrected. We will also collect all no-show name tags to monitor repeat no-shows.
Registration will close at 6:30 pm so that volunteers can attend the talks. If you arrive between 6:30 and 7:00 pm, please leave a message for the organizers (via Meetup) and wait in the lobby. We will send a volunteer down for ONE last pass at 7:00 pm with badges for you. After 7:00 pm we cannot allow any more entries -- we apologize for that!
6:00 pm ANNOUNCEMENTS: Ralph & Nitya
Interested in doing a lightning talk (5-8 mins)? Sign up here (http://bit.ly/gdgny-speaker-signup) or talk to an organizer. We will schedule lightning talks between the featured talks to provide short breaks during the longer sessions.
6:10 pm ========== LIGHTNING TALKS ==================
LT-1 Android, Cloud Vision & Robotics: ZeGoBeast in Action (Speaker: Daniel Goncharov)
We would like to share our experience with the Google Cloud Vision (GCV) API at ZeGoBeast Robotics. Our talk covers how we use the GCV API to drive pseudo-emotional responses and kinematic feedback for our bot. We achieve this by pairing an Android device, which handles the higher-level logic and connectivity to Google Cloud services, with Arduino boards that control the robot's GPIO, read sensors, and drive the actuators. Please check out the short video of our "guy" running around.
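To make the Vision API side of this concrete, here is a minimal sketch (not the ZeGoBeast code) of building a request body for the Vision API's `images:annotate` method with face detection, whose response includes emotion likelihoods such as `joyLikelihood` -- the kind of signal a bot could map to a pseudo-emotional response. The function name and fake image bytes are illustrative.

```python
import base64
import json

def build_face_request(image_bytes):
    """Build a JSON body for the Cloud Vision images:annotate method,
    requesting face detection (which returns emotion likelihoods)."""
    return {
        "requests": [{
            "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
            "features": [{"type": "FACE_DETECTION", "maxResults": 5}],
        }]
    }

# The body would be POSTed (with an API key) to:
#   https://vision.googleapis.com/v1/images:annotate?key=API_KEY
body = build_face_request(b"...camera frame bytes...")
print(json.dumps(body)[:72])
```

On a device, the Android side would capture the frame, send this request, and translate the returned likelihoods into actuator commands over the Arduino link.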
Daniel Goncharov is the co-founder of ZeGoBeast Robotics, an EdTech company. He got his degree in Applied Math & Software Engineering in Russia, and loves to develop with hardware.
LT-2 TensorFlow & Ruby (Speaker: Jason Toy)
Machine learning has typically been associated with Python because that is where the majority of scientific computing libraries are. Now Rubyists can join the machine learning fun with TensorFlow.
Jason Toy is the founder/CEO of http://somatic.io, which builds specialized image effects using deep learning.
6:30 pm ========= FEATURED TALK ====================
Reactive Learning Agents
ABSTRACT: This talk will be focused on the design of learning agents using the techniques of reactive machine learning. We’ll explore the difference between software agents (bots), intelligent agents, and learning agents. As the most complex class of agent, a learning agent has a sophisticated internal architecture, which we’ll break down into different capabilities. Finally, we’ll examine how the techniques from reactive machine learning can allow us to build learning capabilities into our agents.
This talk will not just be about AI concepts, though. It will cover a range of pragmatic techniques to aid you in your efforts to implement learning agents. Throughout, we’ll consider where and how we can use external resources like libraries, services, datasets, and even humans to solve sub-problems in the agent design process.
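As a teaser for the "internal architecture" the talk will break down, here is a toy sketch of the classic split between a learning element and a performance element in a learning agent. The class and method names are illustrative, not the design from the talk.

```python
from dataclasses import dataclass, field

@dataclass
class LearningAgent:
    """A toy learning agent: acts on a percept and updates an
    internal model from feedback."""
    model: dict = field(default_factory=dict)  # percept -> preferred action

    def act(self, percept):
        # Performance element: act from the learned model,
        # falling back to exploration for unseen percepts.
        return self.model.get(percept, "explore")

    def learn(self, percept, rewarded_action):
        # Learning element: remember which action was rewarded.
        self.model[percept] = rewarded_action

agent = LearningAgent()
print(agent.act("meeting_request"))      # unseen percept -> "explore"
agent.learn("meeting_request", "propose_time")
print(agent.act("meeting_request"))      # learned -> "propose_time"
```

A real learning agent replaces the dictionary with a trained model and the reward bookkeeping with a proper feedback pipeline, but the act/learn split is the same.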
Jeff Smith builds large-scale artificial intelligence systems. For the past decade, he has been working on data science applications at various startups in New York, San Francisco, and Hong Kong. Now, he leads the data engineering team behind Amy, the artificial intelligence who schedules meetings at x.ai.
He is a frequent speaker, blogger, and the author of Reactive Machine Learning Systems, an upcoming book on how to build real-world machine learning systems using Scala, Akka, and Spark.
7:30 pm ======== CODE LAB =================
Google Cloud Vision and Natural Language APIs
ABSTRACT: In this session, Sara and Bret will provide an overview of the Google Cloud Vision and Natural Language APIs. They'll walk through a code example that describes how to use each API and give attendees resources to get started on their own.
Google Cloud Vision API enables developers to understand the content of an image by encapsulating powerful machine learning models in an easy-to-use REST API. It quickly classifies images into thousands of categories (e.g., "sailboat", "lion", "Eiffel Tower"), detects individual objects and faces within images, and finds and reads printed words contained within images.
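For a feel of what the code lab will cover, here is a hedged sketch of a Vision API request body asking for both label detection and OCR in one call. The bucket path is a made-up placeholder; the field names (`imageUri`, `LABEL_DETECTION`, `TEXT_DETECTION`) are from the Vision API v1 request format.

```python
import json

def build_annotate_request(image_uri):
    """Body for images:annotate requesting labels and OCR,
    referencing a readable image by URI instead of inline bytes."""
    return {
        "requests": [{
            "image": {"source": {"imageUri": image_uri}},
            "features": [
                {"type": "LABEL_DETECTION", "maxResults": 10},
                {"type": "TEXT_DETECTION"},
            ],
        }]
    }

# Placeholder path for illustration only:
body = build_annotate_request("gs://my-bucket/sailboat.jpg")
print(json.dumps(body, indent=2))
```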
Google Cloud Natural Language API lets developers perform entity and sentiment analysis on text. The entity analysis feature lets you extract entities from text — like people, places and events — with a single API call. With sentiment analysis, you can determine whether a block of text is positive or negative, along with the overall strength of that sentiment.
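The sentiment call is similarly small. Below is a sketch of the request body for the Natural Language API's `documents:analyzeSentiment` method; the sample sentence is ours, and the response would carry `documentSentiment.score` (negative to positive) and `documentSentiment.magnitude` (strength).

```python
import json

def build_sentiment_request(text):
    """Body for the Natural Language documents:analyzeSentiment method."""
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }

# POST (with an API key) to:
#   https://language.googleapis.com/v1/documents:analyzeSentiment?key=API_KEY
body = build_sentiment_request("GDG NYC meetups are great!")
print(json.dumps(body))
```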
Sara Robinson and Bret McGowen are Developer Advocates on the Google Cloud Team, both based out of NYC.
8:20 pm ======== DEMO =================
Neural Style in TensorFlow
ABSTRACT: Neural Style is one of my favorite art experiments that uses machine learning. I'll give a quick demo, and point you to code you can use to create your own images at home (it works out of the box!).
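If you want a taste of the math behind the demo: Neural Style represents an image's "style" with Gram matrices of convolutional feature maps. Here is a dependency-free sketch of that statistic (a teaching illustration, not the demo's code), with channels as plain Python lists.

```python
def gram_matrix(features):
    """Gram matrix of a feature map. `features` is a list of C channels,
    each flattened to N activations; G[i][j] is the normalized inner
    product of channels i and j -- Neural Style's texture statistic."""
    C, N = len(features), len(features[0])
    return [[sum(a * b for a, b in zip(features[i], features[j])) / N
             for j in range(C)] for i in range(C)]

# Two identical toy "channels" correlate perfectly:
G = gram_matrix([[1.0, 2.0], [1.0, 2.0]])
print(G)  # [[2.5, 2.5], [2.5, 2.5]]
```

The style loss compares these matrices between the style image and the generated image; matching them transfers texture without copying layout.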
Joshua Gordon is a Developer Advocate (TensorFlow, Machine Learning) at Google, based out of NYC.
8:30 pm ======= WRAP-UP ===============
Please fill in the survey at: http://bit.ly/gdgsurvey