{"id":837,"date":"2016-02-06T18:46:33","date_gmt":"2016-02-06T18:46:33","guid":{"rendered":"http:\/\/hpca22.site.ac.upc.edu\/?page_id=837"},"modified":"2016-03-20T11:41:22","modified_gmt":"2016-03-20T11:41:22","slug":"keynotes","status":"publish","type":"page","link":"https:\/\/hpca22.site.ac.upc.edu\/index.php\/program\/keynotes\/","title":{"rendered":"Keynotes"},"content":{"rendered":"<h2>Keynotes<\/h2>\n<p>&nbsp;<\/p>\n<h2 style=\"text-align: justify;font-size: 22px;\"> <a name=\"key1\">Keynote I:<\/a> Madan Musuvathi, <em>Microsoft<\/em><\/h2>\n<p>&nbsp;<br \/>\nMonday, March 14<sup>th<\/sup> (8:30am-9:30am)<br \/>\n&nbsp;<\/p>\n<p>Title: <b>Beyond the embarrassingly parallel \u2013 New languages, compilers, and runtimes for big-data processing<\/b><br \/>\n&nbsp; <\/p>\n<p><em>Abstract:<\/em><br \/>\nLarge-scale data processing requires large-scale parallelism. Data-processing systems from traditional databases to Hadoop and Spark rely on embarrassingly-parallel relational primitives (e.g. map, reduce, filter, and join) to extract parallelism from input programs. But many important applications, such as machine learning and log processing, iterate over large data sets with true loop-carried dependences across iterations. As such, these applications are not readily parallelizable in current data-processing systems.<br \/>\n&nbsp;<br \/>\nIn this talk, I will challenge the premise that parallelism requires independent computations. In particular, I will describe a general methodology for extracting parallelism from dependent computations. The basic idea is to replace dependences with symbolic unknowns and execute the dependent computations symbolically in parallel. The challenge of parallelization now becomes a (hopefully mechanizable) task of performing the resulting symbolic execution efficiently. 
This methodology opens up the possibility of designing new languages for data-processing computations, compilers that automatically parallelize such computations, and runtimes that exploit the additional parallelism. I will describe our initial successes with this approach and the research challenges that lie ahead.<br \/>\n&nbsp; <\/p>\n<p><em>Bio:<\/em><br \/>\nMadan Musuvathi is a Principal Researcher at Microsoft Research working at the intersection of programming languages and systems, with a specific focus on concurrency and parallelism. His interests span program analysis, systems, model checking, verification, and theorem proving. His research has led to several tools that improve the lives of software developers both at Microsoft and at other companies. He received his Ph.D. from Stanford University in 2004.<br \/>\n&nbsp;<\/p>\n<h2 style=\"text-align: justify;font-size: 22px;\"> <a name=\"key2\">Keynote II:<\/a> Keshav Pingali, <em>U. Texas<\/em><\/h2>\n<p>&nbsp;<br \/>\nTuesday, March 15<sup>th<\/sup> (8:30am-9:30am)<br \/>\n&nbsp;<\/p>\n<p>Title: <b>50 Years of Parallel Programming: Ieri, Oggi, Domani<sup>*<\/sup><\/b><br \/>\n&nbsp; <\/p>\n<p><em>Abstract:<\/em><br \/>\nParallel programming started in the mid-1960s with the pioneering work of Karp and Miller, David Kuck, Jack Dennis, and others, and as a discipline, it is now 50 years old. What have we learned in the past 50 years about parallel programming? What problems have we solved and what problems remain to be solved? What can young researchers learn from the successes and failures of our discipline? 
This talk is a personal point of view about these and other questions regarding the state of parallel programming.<br \/>\n&nbsp;<br \/>\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <sup>*<\/sup>The subtitle of the talk is borrowed from the title of a screenplay by Alberto Moravia, and it is Italian for \u201cYesterday, Today, Tomorrow.\u201d<br \/>\n&nbsp; <\/p>\n<p><em>Bio:<\/em><br \/>\nKeshav Pingali is the W. A. \u201cTex\u201d Moncrief Chair of Computing in the Department of Computer Science at the University of Texas at Austin. He was on the faculty of the Department of Computer Science at Cornell University from 1986 to 2006, where he held the India Chair of Computer Science.<br \/>\n&nbsp;<\/p>\n<p>Pingali\u2019s research has focused on programming languages and compiler technology for program understanding, restructuring, and optimization. His group is known for its contributions to memory-hierarchy optimization; some of these have been patented. Algorithms and tools developed by his projects are used in many commercial products such as Intel\u2019s IA-64 compiler, SGI\u2019s MIPSPro compiler, and HP\u2019s PA-RISC compiler. His current research is focused on programming languages and tools for multicore processors.<br \/>\n&nbsp;<\/p>\n<p>Pingali is a Fellow of the IEEE, the ACM, and the American Association for the Advancement of Science, and is the Editor-in-Chief of ACM Transactions on Programming Languages and Systems. 
He also served on the NSF CISE Advisory Committee (2009-2011).<br \/>\n&nbsp;<\/p>\n<h2 style=\"text-align: justify;font-size: 22px;\"> <a name=\"key3\">Keynote III:<\/a> Avinash Sodani, <em>Intel<\/em><\/h2>\n<p>&nbsp;<br \/>\nWednesday, March 16<sup>th<\/sup> (8:30am-9:30am)<br \/>\n&nbsp;<\/p>\n<p>Title: <b>Knights Landing Intel Xeon Phi CPU: Path to Parallelism with General Purpose Programming<\/b><br \/>\n&nbsp; <\/p>\n<p><em>Abstract:<\/em><br \/>\nThe demand for high performance will continue to skyrocket in the future, fueled by the drive to solve the challenging problems in the scientific world and to provide the horsepower needed to support the compute-hungry use cases that continue to emerge in the commercial and consumer space, such as machine learning and deep data analytics. Exploiting parallelism will be crucial in achieving the huge performance gain required to solve these problems. This talk will present the new Xeon Phi Processor, called Knights Landing, which is architected to provide massive amounts of parallelism in a manner that is accessible with general purpose programming. The talk will provide insights into 1) the important architecture features of the processor and 2) the software technology to exploit them. It will provide the inside story on the various architecture decisions made on Knights Landing \u2013 why we architected the processor the way we did \u2013 and on programming experience \u2013 how the general purpose programming model makes it easy to exploit parallelism on Xeon Phi. It will show measured performance numbers from the Knights Landing silicon on a range of workloads. The talk will conclude by showing the historical trends in architecture and what they mean for software as we extend the trends into the future.<br \/>\n&nbsp; <\/p>\n<p><em>Bio:<\/em><br \/>\nAvinash Sodani is a Senior Principal Engineer at Intel Corporation and the chief architect of the Xeon Phi Processor called Knights Landing. 
He specializes in the field of High Performance Computing (HPC). Previously, he was one of the architects of the 1st generation Core processor, called Nehalem, which has served as a foundation for today\u2019s line of Intel Core processors. Avinash is a recognized expert in computer architecture and has been invited to deliver several keynotes and public talks on topics related to HPC and the future of computing. Avinash holds over 20 US patents and is known for seminal work on the concept of \u201cDynamic Instruction Reuse\u201d. He has a PhD and an MS in Computer Science from the University of Wisconsin-Madison and a B.Tech (Hons.) in Computer Science from the Indian Institute of Technology, Kharagpur, India.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Keynotes &nbsp; Keynote I: Madan Musuvathi, Microsoft &nbsp; Monday, March 14th (8:30am-9:30am) &nbsp; Title: Beyond the embarrassingly parallel \u2013 New languages, compilers, and runtimes for big-data processing &nbsp; Abstract: Large-scale data processing requires large-scale parallelism. Data-processing systems from traditional databases to Hadoop and Spark rely on embarrassingly-parallel relational primitives (e.g. 
map, reduce, filter, and join) [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"parent":307,"menu_order":0,"comment_status":"open","ping_status":"open","template":"","meta":{"footnotes":""},"class_list":["post-837","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/hpca22.site.ac.upc.edu\/index.php\/wp-json\/wp\/v2\/pages\/837","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hpca22.site.ac.upc.edu\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/hpca22.site.ac.upc.edu\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/hpca22.site.ac.upc.edu\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/hpca22.site.ac.upc.edu\/index.php\/wp-json\/wp\/v2\/comments?post=837"}],"version-history":[{"count":10,"href":"https:\/\/hpca22.site.ac.upc.edu\/index.php\/wp-json\/wp\/v2\/pages\/837\/revisions"}],"predecessor-version":[{"id":936,"href":"https:\/\/hpca22.site.ac.upc.edu\/index.php\/wp-json\/wp\/v2\/pages\/837\/revisions\/936"}],"up":[{"embeddable":true,"href":"https:\/\/hpca22.site.ac.upc.edu\/index.php\/wp-json\/wp\/v2\/pages\/307"}],"wp:attachment":[{"href":"https:\/\/hpca22.site.ac.upc.edu\/index.php\/wp-json\/wp\/v2\/media?parent=837"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}