RUTGERS EXPERIMENTAL MATHEMATICS SEMINAR

sponsored by the

Rutgers University
Department of Mathematics

and the

Center for Discrete Mathematics and Theoretical Computer Science (DIMACS)

Founded in 2003 by Drew Sills and Doron Zeilberger.

Former co-organizers: Drew Sills (2003-2007), Moa Apagodu (2005-2006), Lara Pudwell (2006-2008), Andrew Baxter (2008-2011), Brian Nakamura (2011-2013), Edinah Gnang (2011-2013), Matthew Russell (2013-2016), Nathan Fox (2016-2017), Bryan Ek (2017-2018), Mingjia Yang (2018-2020), Yonah Biers-Ariel (2018-2020)

Current co-organizers:
Doron Zeilberger (doronzeil {at} gmail [dot] com)
Robert Dougherty-Bliss (robert {dot} w {dot} bliss {at} gmail [dot] com)

Archive of Previous Speakers and Talks

You can find links to videos of some of these talks there as well. Currently, our videos are posted to our Vimeo page; previously, they were posted to our YouTube page.


If you would like to be added to the weekly mailing list, email Robert Dougherty-Bliss (robert {dot} w {dot} bliss {at} gmail [dot] com).

Forthcoming Talks

Fall 2021 Semester



Date: Dec. 2, 2021, 5:00pm (Eastern Time) Zoom Link [password: The 20th Catalan number, alias 40!/(20!*21!), alias 6564120420]
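A quick sanity check of the password riddle (the same password is used for the talks below), using only Python's standard library:

    from math import comb, factorial

    # The 20th Catalan number, computed two ways; both print 6564120420.
    print(comb(40, 20) // 21)                                # C(2n, n)/(n + 1) with n = 20
    print(factorial(40) // (factorial(20) * factorial(21)))  # 40!/(20! * 21!)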

Speaker: Gil Kalai, Hebrew University of Jerusalem

Title: Sycamore and other quantum supremacy experiments

Abstract: The notable claim of quantum supremacy presented by Google's team in 2019 consists of demonstrating the ability of a quantum circuit to generate, albeit with considerable noise, bitstrings from a distribution that is considered hard to simulate on classical computers. Google's team estimated that the task performed by their quantum circuit would take 10,000 years on a classical supercomputer. In 2020, a team from the University of Science and Technology of China (USTC) claimed that the sampling task computed by their quantum computer, which differs from Google's, would take 2.5 billion years to perform on a classical supercomputer! In the lecture I will discuss three aspects of these claims:

A) The quality of the statistical tools and methods

B) The strength of the claims from computational complexity

C) The quality of the data
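For context on (A): the statistic at the heart of Google's claim is the linear cross-entropy benchmark (XEB), which scores sampled bitstrings by their ideal circuit probabilities. Below is a minimal sketch of that estimator on a synthetic distribution; the toy qubit count and the Porter-Thomas-style (exponential) model of the ideal probabilities are illustrative assumptions, not the actual experimental pipeline.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 12        # toy qubit count (Sycamore used 53)
    N = 2 ** n

    # Synthetic "ideal" output distribution with Porter-Thomas-style
    # (exponentially distributed) probabilities, normalized to sum to 1.
    p = rng.exponential(size=N)
    p /= p.sum()

    def linear_xeb(samples):
        # Linear XEB fidelity estimate: F = 2^n * mean(p(x_i)) - 1.
        # F is near 1 if the samples follow p, near 0 for uniform noise.
        return N * p[samples].mean() - 1

    m = 100_000
    ideal = rng.choice(N, size=m, p=p)   # noiseless sampler
    noise = rng.integers(N, size=m)      # fully depolarized: uniform bitstrings

    print(linear_xeb(ideal))  # close to 1
    print(linear_xeb(noise))  # close to 0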



Date: Dec. 9, 2021, 5:00pm (Eastern Time) Zoom Link [password: The 20th Catalan number, alias 40!/(20!*21!), alias 6564120420]

Speaker: James Davenport, University of Bath, UK

Title: Experimental Complexity Theory?

Abstract: Complexity theory is generally a two-handed piece between the upper-bound O(f(n)) algorithm designers and the lower-bound Ω(f(n)) example builders. If they agree, we're in Θ(f(n)) paradise. Implicit in this is "worst case". Only rarely does "average case" complexity get mentioned, not least because even defining "average case" is hard. What the user of an algorithm is really interested in, of course, is "complexity on my problems". Failing this, we could at least ask for "complexity on typical problems", which raises the question of what "typical" means. This is normally answered by having a collection of typical problems, something many fields (e.g. my own field of computer algebra) are pretty poor at. I will contrast this with the situation in SAT-solving, and finish with some ideas for the future.
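To make "complexity on typical problems" concrete, here is a minimal experimental sketch (entirely illustrative; the talk's examples come from computer algebra and SAT-solving, not from sorting): time an algorithm on random inputs of growing size and estimate a growth exponent from the log-log slope.

    import random
    import time

    import numpy as np

    # Time a deliberately humble algorithm on random inputs of growing size.
    sizes = [2 ** k for k in range(14, 21)]
    times = []
    for n in sizes:
        data = [random.random() for _ in range(n)]
        t0 = time.perf_counter()
        sorted(data)
        times.append(time.perf_counter() - t0)

    # Fit t ~ c * n^alpha: the slope of log(t) against log(n) estimates alpha.
    alpha, _ = np.polyfit(np.log(sizes), np.log(times), 1)
    print(f"estimated exponent: {alpha:.2f}")  # a bit above 1, as n*log(n) predicts

Whether uniform random inputs count as "typical" is, of course, exactly the question the abstract raises.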


Spring 2022 Semester


Date: Jan. 27, 2022, 5:00pm (Eastern Time) Zoom Link [password: The 20th Catalan number, alias (40)!/(20!*21!), alias 6564120420 ]

Speaker: Donald E. Knuth

Title: TBD

Abstract: TBD