Invited Speakers
Robert Kowalski, Imperial College London
Peter Norvig, Google
Ed Schonberg, NYU and AdaCore
Guido van Rossum, Microsoft
Abstracts
Logic programs and imperative programs employ different notions of computation. Logic programs compute by proving that a goal is a logical consequence of the program, or alternatively by showing that the goal is true in a model defined by the logic program. Imperative programs compute by starting from an initial state, executing actions to transition from one current state to the next, and terminating when the goal is solved.
I will argue that the two notions of computation can be combined, and that the resulting combination is more powerful than the sum of the two. In the proposed combination, exemplified by the computer language LPS (Logic Production System), logic programs represent the beliefs of an agent, and reactive rules of the logical form if antecedent then consequent represent the agent's goals. Computation in LPS generates a model, defined by a logic program, to make the reactive rules true, by making consequents of rules true whenever antecedents become true. The model is a classical model with explicit time, but is constructed by starting from an initial state and destructively updating the current state. The model may be infinite if there is no end of time.
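The LPS computation cycle described above can be sketched in a few lines of Python. This is an illustrative toy, not the actual LPS language or implementation: the `run` function and the example predicates ("hungry", "full") are hypothetical, chosen only to show reactive rules making their consequents true whenever their antecedents become true, while the evolving model is recorded against explicit time.

```python
# Toy sketch of LPS-style computation (illustrative only, not real LPS):
# reactive rules fire whenever their antecedent holds in the current state,
# and their consequent destructively updates that state. The sequence of
# timestamped states is the (classical, explicit-time) model being built.

def run(initial_state, reactive_rules, max_time=5):
    """Generate a model as a mapping from time points to states."""
    state = set(initial_state)
    model = {0: frozenset(state)}          # model with explicit time
    for t in range(1, max_time + 1):
        for antecedent, consequent in reactive_rules:
            if antecedent(state):          # antecedent became true...
                consequent(state)          # ...so make the consequent true
        model[t] = frozenset(state)        # would be infinite if time never ends
    return model

# Hypothetical example rule: if the agent is hungry, then it eats.
rules = [
    (lambda s: "hungry" in s,
     lambda s: (s.discard("hungry"), s.add("full"))),
]

model = run({"hungry"}, rules, max_time=2)
print(model[0])  # frozenset({'hungry'})
print(model[2])  # frozenset({'full'})
```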
Bio:
Robert Kowalski is Emeritus Professor at Imperial College London. His research is concerned both with developing human-oriented models of computing and computational models of human thinking. His early work in the field of automated theorem-proving contributed to the development of logic programming in the early 1970s. His later research has focused on the use of logic programming for knowledge representation and problem solving. It includes work on the event calculus, legal reasoning, abductive reasoning and argumentation. He received the IJCAI award for Research Excellence in 2011, and the JSPS Award for Eminent Scientists in 2012.
Machine learning has shown that large language models can solve many problems, including writing code and explaining solutions to math problems. How reliable are these systems? What would it take to make them better? How will they change what it means to program in the future? This talk attempts some preliminary answers.
Bio:
Peter Norvig is a Distinguished Education Fellow at Stanford's Human-Centered Artificial Intelligence Institute and a researcher at Google Inc; previously he directed Google's core search algorithms group and Google's Research group. He was head of NASA Ames's Computational Sciences Division, where he was NASA's senior computer scientist and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at the University of Southern California, Stanford University, and the University of California at Berkeley, from which he received a Ph.D. in 1986 and the distinguished alumni award in 2006. He was co-teacher of an Artificial Intelligence class that signed up 160,000 students, helping to kick off the current round of massive open online classes. His publications include the books Data Science in Context (to appear in 2022), Artificial Intelligence: A Modern Approach (the leading textbook in the field), Paradigms of AI Programming: Case Studies in Common Lisp, Verbmobil: A Translation System for Face-to-Face Dialog, and Intelligent Help Systems for UNIX. He is also the author of the Gettysburg Powerpoint Presentation and the world's longest palindromic sentence. He is a fellow of the AAAI, ACM, California Academy of Science and American Academy of Arts & Sciences.
In the 1980s, SETL was used in a large-scale software prototyping project to create an executable definition of the new language Ada. That executable definition made more precise the static semantics of the language (such as type resolution) as well as the run-time semantics, in particular the Ada model of concurrency. Subsequent versions of Ada continue to address the original design objectives of the language: modularity, data abstraction, a strong static type model, and facilities for correctness analysis. At the same time, the language reflects the development of programming methodologies over the last 30 years, and has grown accordingly in expressiveness and complexity. This evolution puts into question the notion of "high-level" and suggests that in the multi-language environment in which modern software lives, "wide-spectrum" is a better goal for language design.
Bio:
Ed Schonberg is professor of Computer Science (Emeritus) at the Courant Institute of Mathematical Sciences, NYU, and retired vice-president of AdaCore Inc. He was part of the group at NYU under the leadership of Jack Schwartz that designed and implemented the programming language SETL. With a team led by Robert Dewar he participated in the construction of the first executable definition of Ada (1983). With Robert Dewar and Richard Kenner he co-founded AdaCore. He has been a member of the Ada Rapporteur Group since 1996, and has participated in the design and implementation of successive versions of Ada, up to the current Ada 2022.
Programming languages are designed with many different goals in mind. As language designers we imagine a typical user of our language and a typical task to perform. Then we apply other design criteria, for example, the program must run fast or conserve memory, or it must not experience certain types of errors. Finally there are constraints on the implementation, for example, the program must run on a certain class of hardware or in a given environment, or it must support some form of compatibility with another language. Or maybe we want to optimize the developer experience, for example, we want a fast compiler and linker, or we want the user to get to running code quickly.
Often our ambitions and practicalities don't match. Users (ab)use the language for purposes we had never anticipated. A target platform becomes obsolete -- or wildly successful. Users keep making the same mistakes over and over. Another language popularizes features that we wish our language had. Security vulnerabilities could be unpluggable. Lawyers could even get involved. What's a language designer to do? You can retreat into maintenance mode (Tcl/Tk), design an ambitious new language (Perl 6), or build a translator (TypeScript). Most languages evolve, more or less successfully.
I invite you to pepper me with questions covering these topics, especially (but not exclusively) when it comes to Python.
Bio:
Guido van Rossum is the creator of the Python programming language. He grew up in the Netherlands and studied at the University of Amsterdam, where he graduated with a Master's Degree in Mathematics and Computer Science. His first job after college was as a programmer at CWI, where he worked on the ABC language, the Amoeba distributed operating system, and a variety of multimedia projects. During this time he created Python as a side project. He then moved to the United States to take a job at a non-profit research lab in Virginia, married a Texan, worked for several startups, and moved to California. In 2005 he joined Google, where he obtained the rank of Senior Staff Engineer, and in 2013 he started working for Dropbox as a Principal Engineer. In October 2019 he retired. After a short retirement he joined Microsoft as Distinguished Engineer in 2020. Until 2018 he was Python's BDFL (Benevolent Dictator For Life), and he is still deeply involved in the Python community. Guido, his wife and their teenager live in Silicon Valley, where they love hiking, biking and birding.