<div dir="ltr"><div>Hi everyone, </div><div><br></div><div>Just a reminder about the talk tomorrow. Hope to see you there. </div><div><br></div><div>Bryan</div><div><br></div><br><div class="gmail_quote"><div dir="ltr">On Thu, Jan 17, 2019 at 3:35 PM Bryan Tripp <<a href="mailto:bptripp@gmail.com">bptripp@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><div dir="ltr">Hi everyone, <div><br></div><div>Please join us for next week's talk by Roland Memisevic, who was previously a professor at MILA and now runs a company in Toronto. This talk is a bit outside our usual scope, but it's a great example of getting neural networks to develop a sophisticated understanding of visual signals. </div><div><br></div><div>Bryan</div><div><br></div><div><div>Building a context-aware AI avatar</div><div>Roland Memisevic<br></div><div><br></div><div>At TwentyBN we are building an AI system that interacts with you while "looking" at you, allowing it to understand your behaviour, your surroundings, and the full context of the engagement. At the core of this technology is a crowd-acting platform that allows humans to engage with and teach our system about everyday aspects of our lives and of our physical world. This allows us to instill in neural networks a human-like "common sense" understanding of everyday scenes and situations, and to train them to have conversations with rich visual context. I will also describe how this approach powers "Millie", the interactive digital avatar we unveiled at NeurIPS 2018. </div></div><div><br></div></div></div></div>
</blockquote></div></div>