16:54:49 From MONTREAL AI To All Panelists : Hi
16:54:56 From francesca rossi To All Panelists : Hi all
16:54:57 From Artur Garcez To MONTREAL AI(privately) : Hi
16:56:47 From Jürgen of Arabia To All Panelists : Couldn't figure out how to change it …
17:02:41 From Gary Marcus To All Panelists : after vince introduces me i will speak for roughly four minutes and then introduce Noam (very briefly – all intros will be brief)
17:06:00 From francesca rossi To All Panelists : At the top
17:06:07 From francesca rossi To All Panelists : Go to the top with the mouse
17:15:00 From MONTREAL AI To All Panelists : Thank you Francesca, it was hidden by another set of buttons . . .
17:35:37 From Gary Marcus To All Panelists : Anyone on panel want to ask a question?
17:37:48 From francesca rossi To All Panelists : Do you think that neuroscience insights are more promising than cognitive science ones? Knowledge of brain rather than mind.
17:55:27 From Gary Marcus To All Panelists : again let me know if you have q's (and sorry i missed Francesca's last time)
17:55:35 From francesca rossi To All Panelists : No problem!
17:57:33 From Artur Garcez To All Panelists : On innateness and commonsense: there is a lot of talk currently about constraining DL. It helps to pay attention to contributions from Symbolic AI, e.g. commonsense requires "jumping to conclusions" as studied in nonmonotonic logic. Formalization of commonsense can help us here!
17:58:01 From Gary Marcus To All Panelists : artur, i'll ask you to make the point
18:18:21 From francesca rossi To All Panelists : LLMs don't have bodies, but still they hallucinate. How is Jurgen explaining this?
18:19:15 From Gary Marcus To All Panelists : after this Ben Goertzel by video and then Juergen
18:30:10 From Gary Marcus To All Panelists : juergen is next in about 1-2 min
18:34:42 From Gary Marcus To MONTREAL AI(privately) : when section 3 finishes could you put an onscreen countdown timer for 5 min for a break?
18:36:49 From Gary Marcus To All Panelists : Francesca is next
18:36:58 From francesca rossi To All Panelists : ok
18:37:36 From Artur Garcez To All Panelists : Kai-Fu has joined us. Nice to e-meet you.
18:37:37 From Gary Marcus To All Panelists : good morning Kai-Fu!
18:39:40 From Gary Marcus To All Panelists : jeff you are next
18:39:50 From Gary Marcus To All Panelists : then Sarah
18:45:29 From Gary Marcus To MONTREAL AI(privately) : should be able to get you out by then (though just to speak, not to be on panel)
18:47:09 From Jürgen of Arabia To All Panelists : Unfortunately I'll have to leave you soon as it's way past midnight over here … it was an honour and a pleasure meeting all of you! Jürgen
18:47:36 From francesca rossi To All Panelists : Bye Jurgen! Happy holidays!
18:47:52 From Dileep George To All Panelists : bye Jurgen, was great having you here!
18:49:04 From Gary Marcus To All Panelists : thanks so much Juergen for joining!
18:51:51 From Gary Marcus To All Panelists : sarah is next
18:52:20 From MONTREAL AI To All Panelists : Thank you so much Jurgen!
18:52:51 From Gary Marcus To All Panelists : Artur is next
18:55:35 From Gary Marcus To All Panelists : i may skip discussion on this round because we are fairly far behind; if anyone has a burning must-ask question let me know. after the discussion, if any, we will take a 5 min (exactly, not more) biobreak
18:55:36 From francesca rossi To All Panelists : Not just lack of GPUs (computing power), also lack of data
18:55:47 From francesca rossi To All Panelists : Ok no problem
18:57:10 From Jeff Clune To All Panelists : I think I have a fun burning question Gary
18:57:17 From Jeff Clune To All Panelists : I'll type it here
18:58:03 From Gary Marcus To All Panelists : [sure type it]
19:01:19 From Gary Marcus To All Panelists : leaning towards no question just because of time issues, and because Q2 discussion anticipated a lot of Q3
19:02:29 From Gary Marcus To MONTREAL AI(privately) : Michelle will be first speaker after break, because she has to go to another event
19:03:16 From Jeff Clune To All Panelists : I'm happy to ask this live. I agree with everyone here that current systems are far from perfect, and we've seen tons of examples of their failures, but wouldn't those here agree that ChatGPT has fewer of those flaws than GPT3 (and GPT3 fewer than GPT2, and GPT2 fewer than GPT1)? If we agree each successive GPT is more powerful, what solved/mitigated the problems was not adding more manual structure (or other manual path ideas), but instead scaling up the same playbook. So why should we now conclude we need the manual path, instead of more scale (or at least tweaks on the existing paradigm, rather than an alternative paradigm)?
19:04:08 From Dileep George To All Panelists : See my Hindenburg example. In 1920 someone could have made the same statement: "Longer flights were achieved by building bigger balloons."
19:04:34 From Gary Marcus To All Panelists : ok Jeff will ask his question; dileep and I will give very brief answers and then the 5 min break. if anyone else wants to give a very brief answer to jeff, let me know
19:05:08 From Jeff Clune To All Panelists : Sure, but you could reverse that argument. The planes were working and improving… we just needed to keep scaling that paradigm!
19:05:21 From Gary Marcus To All Panelists : after 5 min break Michelle will lead off, slightly out of sequence. Members of Parliament! 🙂
19:05:23 From Dileep George To All Panelists : balloons existed before planes.
19:05:29 From Yejin Choi To All Panelists : Jeff's question is great! I'm happy to answer as well if time permits
19:05:31 From Dileep George To All Panelists : move my question to 1900
19:05:40 From Jeff Clune To All Panelists : So, the question is whether deep learning is the plane or the Hindenburg. :-) My money's on it being the jet. :-)
19:05:41 From Gary Marcus To All Panelists : ok dileep, yejin, gary; no follow-up for jeff, sorry
19:07:32 From Dave Ferrucci To All Panelists : GPT is getting better at what it does. But given what it does — is that really what we want? I challenge the group to characterize the intelligence we want. I would not trust even a human intelligence that could not convince me it understood what it was talking about, regardless of how many times they may appear right.
19:08:34 From francesca rossi To All Panelists : I agree with David. We want/need AI machines that can explain and show understanding, otherwise they cannot be trusted.
19:08:35 From Artur Garcez To All Panelists : Scaling up is fine as long as we know what we are doing, e.g. scaling up + XAI, but there is a limit to it (e.g. handling variables).
19:11:24 From francesca rossi To All Panelists : Trust is the key in this respect.
19:17:25 From Dave Ferrucci To All Panelists : I like doing thought experiments where I imagine myself trying to determine if I trust a human intelligence. What do I look for in their answers and explanations? In large part it is transparent meta-cognition — I am confident where I hear them annotate their responses with confidence measures in their beliefs, explanations that ground their beliefs in the basic assumptions they are taking on face value. I listen for explanations of how they are making inferences and why they chose one path over another. In just a couple of words — I look for humility and transparency. And then of course they need to be right ;)
19:18:10 From Jeff Clune To All Panelists : Me too!
19:19:22 From Gary Marcus To All Panelists : yejin is next, on ethics, then anja, then jeff; then the final panel
19:19:47 From francesca rossi To All Panelists : Am I not in the ethics panel?
19:20:06 From Gary Marcus To All Panelists : oops yes francesca also in ethics panel!
19:20:10 From francesca rossi To All Panelists : 🙂
19:22:14 From Gary Marcus To All Panelists : take that, Sundar (AI vs fire and electricity)/sentient fire!
19:23:43 From Dave Ferrucci To All Panelists : What is so interesting about AI and ethics is — as we develop standards for AI, will we hold ourselves responsible to also hold human intelligence to the same standards?
19:24:41 From sarahooker To All Panelists : Alas, it is past midnight here in the UK. Will have to exit early. But thank you everyone for a fantastic set of discussions. Happy holidays everyone!
19:24:48 From Gary Marcus To All Panelists : dave, great if you could make that observation briefly in discussion
19:24:54 From Gary Marcus To All Panelists : sarah thanks for coming, fabulous talk
19:25:34 From francesca rossi To All Panelists : Definitely! Reasoning about AI ethics and making AI value-aligned is forcing us to think carefully about our values and what to expect also from humans.
19:25:45 From Dave Ferrucci To All Panelists : !!
19:26:20 From Michelle Rempel Garner To All Panelists : Thanks for the opportunity to participate.
19:29:33 From MONTREAL AI To All Panelists : You're very welcome. Thank you so much!
19:29:49 From Dave Ferrucci To All Panelists : People make bad (expensive) decisions all the time that they cannot explain or justify — morally or otherwise. If we end up holding AI to a higher standard, we are positioning AI vs Human in a very challenging way. Love this topic because if we can figure out a framework for this — wow —
19:30:14 From Dave Ferrucci To All Panelists : Thank you Yejin
19:31:02 From Gary Marcus To All Panelists : Jeff is next, then Francesca
19:31:06 From francesca rossi To All Panelists : ok
19:34:01 From Erik Brynjolfsson To MONTREAL AI(privately) : I know some folks who want to join but they get a message that "the event has ended". Is there a link or youtube channel they can go to?
19:34:03 From Gary Marcus To All Panelists : probably will again keep discussion short, allowing david an observation and then maybe saving discussion until the final panel (policy), which overlaps somewhat
19:34:48 From Gary Marcus To All Panelists : 755? 8?
19:35:01 From Gary Marcus To All Panelists : you are first in second panel
19:35:07 From Yejin Choi To All Panelists : I believe we need to put a higher standard on AI primarily due to (1) lack of accountability for AI mistakes and (2) the risk of systematicity of AI (meaning, unlike a human mistake that may be bound to one individual, an AI mistake might have a much more systematic impact on the broader society, e.g., reinforcing unjust biases)
19:35:08 From Gary Marcus To All Panelists : oops sorry ignore
19:35:36 From Gary Marcus To All Panelists : yejin please make that remark briefly, too; so far queued for quick remarks are Dave and Yejin
19:35:43 From Yejin Choi To All Panelists : Thanks much Dave and Francesca for raising really intriguing points to discuss!
19:36:18 From President of MONTREAL AI To All Panelists : YouTube: https://youtu.be/JGiLz_Jx9uI
19:36:57 From MONTREAL AI To Erik Brynjolfsson(privately) : YouTube: https://youtu.be/JGiLz_Jx9uI
19:37:34 From Dave Ferrucci To All Panelists : I agree about humans take responsibility and are the ones ultimately accountable — but — do we want of the mistake (possibly disastrous for many — high cost if they have lots of power) — are we we hold their "intelligence" accountable?
19:38:08 From francesca rossi To All Panelists : Yejin: multi-disciplinarity is important, but also multi-stakeholder approaches. We need to work also with other stakeholders.
19:38:24 From Dave Ferrucci To All Panelists : (Sorry typos: *do we wait… *or do we hold their…)
19:38:41 From francesca rossi To All Panelists : I was wondering … 🙂
19:39:01 From Yejin Choi To All Panelists : Francesca: Agreed! That's also what I meant to convey by pitching "value pluralism" — supporting multiple stakeholders' values
19:39:27 From francesca rossi To All Panelists : Humans are ultimately accountable for the final decision and action, but AI needs to be accountable and explainable to human decision makers.
19:40:23 From Yejin Choi To All Panelists : Agreed again! (Even though we are supposed to disagree and debate when possible 😄)
19:40:40 From francesca rossi To All Panelists : Accountability is needed for both humans and AI, but to different stakeholders (society/regulators and human decision maker).
19:41:00 From francesca rossi To All Panelists : Ah ok, right 🙂
19:41:38 From francesca rossi To All Panelists : We are very late, I think 🙂
19:41:42 From francesca rossi To All Panelists : I will try to be very short
19:41:50 From Dave Ferrucci To All Panelists : Regarding the systemic point on AI — agree again, if AI control has massive reach — but many individuals may have very broad influence as well. I don't want to make the debate about holding human intelligence accountable — but my point, at its core, is about the challenge in judging an intelligence — morally, logically, factually, methodologically… this would be fabulous of course — but do we have clear expectations? We don't
19:42:22 From Dave Ferrucci To All Panelists : want to set the expectations so high we cannot achieve them
19:42:54 From Artur Garcez To All Panelists : Yes, accountability! https://arxiv.org/abs/2110.09232
19:43:18 From francesca rossi To All Panelists : Judging an intelligence is a good point; much work to do to define what it means
19:43:48 From francesca rossi To All Panelists : Auditing is a very hot topic now, mostly for regulatory purposes
19:43:57 From francesca rossi To All Panelists : But also important without regulations requiring it
19:44:03 From Dave Ferrucci To All Panelists : To the extent that application of AI forces this discussion I think it's great
19:46:13 From francesca rossi To All Panelists : Learning them does not mean that they should not be explicit
19:46:15 From Gary Marcus To All Panelists : but even convolution is hardcoded and trumps pure learning…
19:46:48 From francesca rossi To All Panelists : Ethics should be embedded with neuro-symbolic approaches
19:46:51 From francesca rossi To All Panelists : Data and rules
19:47:18 From Dileep George To All Panelists : Collecting data and curating it is equivalent… just by "learning" it doesn't solve the fundamental problems with programming in values.
19:47:46 From francesca rossi To All Panelists : Right, learning from data would not achieve what we need
19:48:25 From francesca rossi To All Panelists : "Ethical AI" is wrong to use, in my view: AI is not ethical or unethical. It may be value-aligned or not.
19:48:26 From Dave Ferrucci To All Panelists : Agreed — "learning" is programming by example — the notion that it is not guided or designed is not quite right — it is mimicking the data given it
19:49:16 From Dave Ferrucci To All Panelists : The Model (if we don't want to call it a program) is not coming from a trusted oracle
19:49:52 From Jeff Clune To All Panelists : What did you completely disagree with, Gary?
19:50:58 From Yejin Choi To All Panelists : Jeff: I agree top-down ethical programming won't work. Descriptive ethics based on examples can be empirically much more promising (which is basically what Delphi is based upon)
19:51:31 From Gary Marcus To All Panelists : brief remarks for david and yejin, and then Erik will be next, then Kai-Fu and finally Angela
19:51:32 From Dave Ferrucci To All Panelists : Transparency is not enough when there is an asymmetrical risk in the outcome
19:51:41 From Gary Marcus To All Panelists : and then a very brief question for all who remain
19:52:10 From Yejin Choi To All Panelists : Suddenly this chat room is very 🔥
19:52:20 From Gary Marcus To All Panelists : 🙂
19:52:29 From Dave Ferrucci To All Panelists : Transparency is necessary; not sure it is sufficient
19:53:35 From Yejin Choi To All Panelists : In general, it's safe to assume nothing is sufficient (e.g., scale is necessary but not sufficient, regulation is necessary but not sufficient)
19:54:05 From Dave Ferrucci To All Panelists : Fair enough
19:54:30 From Dave Ferrucci To All Panelists : I think you mean no ONE thing
19:54:53 From Yejin Choi To All Panelists : Yes 🙂
19:54:54 From Dave Ferrucci To All Panelists : Unless saying that there is no satisfactory solution
19:54:56 From Dave Ferrucci To All Panelists : ok
19:55:22 From Jeff Clune To All Panelists : Gary (or anyone who heard him), can you repeat what you disagreed with in my talk? I missed it.
19:55:48 From Gary Marcus To All Panelists : I think Francesca's argument for why we do need values comes closest to what i would say
19:56:06 From Jeff Clune To All Panelists : Yejin: sounds great. I want to learn more about Delphi. :-)
19:56:10 From Dave Ferrucci To All Panelists : So I should restate — transparency after the fact does not remove the need to have a value standard agreed upon ahead of time.
19:56:44 From Gary Marcus To All Panelists : i would add – learning has worked for some things but not all; that e.g. convolution is vital and other priors might be too; that there is no good reason to exclude explicit precepts from ethical data input; and that complex systems of laws tend to be better than anarchy
19:56:46 From Jeff Clune To All Panelists : But I DO think we need values… it's just a question of what is the best way to get AI's values aligned with our own
19:56:59 From Yejin Choi To All Panelists : Jeff: https://arxiv.org/abs/2110.07574
19:57:08 From Gary Marcus To All Panelists : i don't think we should entrust them entirely to experiential learning; we certainly don't do that with our kids
19:57:12 From Jeff Clune To All Panelists : thanks
19:57:22 From Gary Marcus To All Panelists : but i don't think we should air this publicly now in the interest of time
19:57:22 From Yejin Choi To All Panelists : Jeff: also, https://arxiv.org/abs/2205.01975
19:57:33 From Gary Marcus To All Panelists : francesca, we need you to wrap up
19:58:17 From Dave Ferrucci To All Panelists : The right value system will take some time to learn — humans are still working on it — I give it 3-5 more years for us to find it ;)
20:00:07 From Gary Marcus To All Panelists : yejin will get the last brief remark in this panel and then on to Erik, Kai-Fu, Angela
20:00:27 From Erik Brynjolfsson To All Panelists : ok
20:02:04 From Dave Ferrucci To All Panelists : Yes Hi — love to
20:02:55 From Angela Sheffield To All Panelists : AI gives us a chance at decision transparency. Which I think is good. And perhaps even worth it.
20:03:18 From Yejin Choi To All Panelists : angela: Oh I love that point!
20:03:47 From francesca rossi To All Panelists : I just see a picture
20:03:50 From Dave Ferrucci To All Panelists : Yes — agree too - that is the beauty of it
20:03:52 From Yejin Choi To All Panelists : Erik: we can't hear you and your screen is set on the photo
20:04:36 From Gary Marcus To All Panelists : K-F next
20:04:57 From Artur Garcez To All Panelists : A better metric is needed for XAI than human judgement. We argue for "fidelity" of the explanation w.r.t. the AI system.
20:05:31 From francesca rossi To All Panelists : Right! In my slides I called it faithfulness
20:05:38 From francesca rossi To All Panelists : We need faithful and true explanations
20:06:59 From francesca rossi To All Panelists : If AI only mimics humans, there would be no innovation
20:07:34 From francesca rossi To All Panelists : But generative AI could change this
20:09:24 From Dileep George To All Panelists : interesting dilemma… the complementing category, like being a writing assistant, is created by behavior cloning, i.e., directly copying humans. It's just that the copying doesn't copy how the human does it, it copies what the human does, but imperfectly, so we can deploy them only as augmentation.
20:09:32 From Dileep George To All Panelists : (dilemma/paradox)
20:09:49 From Gary Marcus To MONTREAL AI(privately) : do save the chat for sharing among the panel
20:10:39 From MONTREAL AI To Gary Marcus(privately) : I think everyone recording will get a copy of the chat.
20:11:26 From Dave Ferrucci To All Panelists : Is that true? Are you admitting that copying a system's output can never be as good as copying "how" it does what it does? Aren't machines doing some jobs categorically better?
20:11:29 From MONTREAL AI To Gary Marcus(privately) : But, I should get the chat with high probability.
20:12:22 From francesca rossi To All Panelists : I think machines can be better than humans, especially when a lot of data needs to be considered to make a decision
20:12:30 From Dileep George To All Panelists : yes, definitely some jobs. But imitation learning in robotics hasn't worked out that well, for example.
20:12:32 From Gary Marcus To All Panelists : angela is next
20:13:43 From Gary Marcus To All Panelists : After Angela, we can take a few questions, then lightning round and very quick wraps from me and then closing from Vince
20:13:54 From Dave Ferrucci To All Panelists : Dileep: Ok — that is one example; one would argue also that learning from the output is never as good — there are other examples.
20:14:01 From Gary Marcus To All Panelists : lightning round will be: If you could give students (undergrads, grad students, postdocs) one piece of advice—e.g., on what AI question most needs researching or on how to prepare for a world in which AI becomes increasingly central to our existence—what would that advice be?
20:14:09 From francesca rossi To All Panelists : ok
20:15:50 From Dave Ferrucci To All Panelists : Ironic, that artistic creativity falls pretty to AI faster than more "basic" functions.
20:15:59 From Dave Ferrucci To All Panelists : *falls prey
20:16:16 From Gary Marcus To All Panelists : heh heh not pretty
20:16:26 From Dave Ferrucci To All Panelists : Well — maybe pretty ;)
20:16:47 From francesca rossi To All Panelists : One (even a human) can be creative and not be able to do some basic tasks
20:16:53 From Gary Marcus To All Panelists : the art, but not the effect on commercial artists
20:17:15 From Dave Ferrucci To All Panelists : Gary: yes big economic issue there
20:17:55 From Dave Ferrucci To All Panelists : Francesca: true — but I guess I would have guessed AI would learn how to walk before writing a novel ;)
20:19:29 From francesca rossi To All Panelists : Walking is an example of a task that is easy for a human but difficult for a machine; writing a novel is the opposite. We tend to anthropomorphize AI too much.
20:19:55 From Dave Ferrucci To All Panelists : Agree.
20:20:14 From Dave Ferrucci To All Panelists : This sounds terrible — is it reason enough for governments to control the application of generative AI for these purposes?
20:20:46 From francesca rossi To All Panelists : The AU AI Act will most probably include foundation models in some way
20:21:28 From francesca rossi To All Panelists : Some amendment proposals are very strong: they want FMs to comply with the same obligations as high-risk AI systems, even if they are used for non-high-risk applications.
20:22:12 From francesca rossi To All Panelists : AU —> EU
20:22:39 From Dave Ferrucci To All Panelists : Sobering —
20:23:29 From Dave Ferrucci To All Panelists : Kai-Fu: Are you optimistic there are things we can do —
20:23:34 From Artur Garcez To All Panelists : So I don't disagree with Kai-Fu after all - disinformation IS the biggest threat
20:23:41 From francesca rossi To All Panelists : There is already work to make FMs more trustworthy, also at IBM
20:23:41 From Gary Marcus To All Panelists : i agree
20:23:44 From Dave Ferrucci To All Panelists : agree
20:24:09 From Dave Ferrucci To All Panelists : But how controls the people making them "trustworthy"?
20:24:20 From Dave Ferrucci To All Panelists : *but who and how I guess
20:24:30 From Artur Garcez To All Panelists : AI teaching at schools!
20:24:35 From francesca rossi To All Panelists : Regulations and standards and audits, they are coming soon
20:24:45 From francesca rossi To All Panelists : Also education, right AG!
20:25:10 From Dave Ferrucci To All Panelists : One answer is regulating how AI works — which goes back to earlier discussion
20:25:40 From Dave Ferrucci To All Panelists : Do we look under the hood and regulate the technical approach?
20:26:06 From Gary Marcus To All Panelists : think we will skip the discussion, go straight to lightning round, but feel free to interpret my query liberally;
20:26:20 From Gary Marcus To All Panelists : ie give one brief last thought, whatever you like
20:26:45 From francesca rossi To All Panelists : Regulating the technology is not ideal, also not future-proof
20:27:02 From francesca rossi To All Panelists : I would think that regulating the applications of the technology is better
20:27:08 From francesca rossi To All Panelists : And more so for high-risk applications
20:28:11 From francesca rossi To All Panelists : Trustworthy AI playbooks are currently used, as well as risk assessment processes
20:28:34 From francesca rossi To All Panelists : They are being adapted to address also generative AI issues
20:28:41 From Gary Marcus To All Panelists : order: Erik, Dave, Yejin, Anja, Dileep, Artur, K-F, Jeff, Francesca
20:28:47 From francesca rossi To All Panelists : thanks!
20:30:08 From Erik Brynjolfsson To All Panelists : What is the exact question for the lightning round?
20:30:33 From Gary Marcus To All Panelists : one piece of advice for students (pasted exact wording above)
20:30:46 From Erik Brynjolfsson To All Panelists : 👍
20:30:47 From Dave Ferrucci To All Panelists : I am convinced more than when I came in that the AI in the AI matters — just like I would learn to trust a human based on HOW they think and HOW they learn, I would build trust in an AI — not just based on good outputs.
20:31:26 From francesca rossi To All Panelists : Yes, but we don't know the details of how humans would solve a problem
20:31:31 From MONTREAL AI To All Panelists : At the end, I'll terminate the live streams, but not the zoom 🙂
20:31:36 From francesca rossi To All Panelists : I trust human judgement
20:31:39 From Dave Ferrucci To All Panelists : That is why I probe a lot !!! ;)
20:31:46 From francesca rossi To All Panelists : 🙂
20:32:05 From francesca rossi To All Panelists : Erik, I think the question is this one: If you could give students (undergrads, grad students, postdocs) one piece of advice—e.g., on what AI question most needs researching or on how to prepare for a world in which AI becomes increasingly central to our existence—what would that advice be?
20:34:05 From Gary Marcus To All Panelists : repeating the lightning round running order: Erik, Dave, Yejin, Anja, Dileep, Artur, K-F, Jeff, Francesca
20:36:47 From Gary Marcus To All Panelists : anja camera on if still here?
20:40:43 From Dave Ferrucci To All Panelists : Great Event! Thoroughly enjoyed it. All great talks. Thank you so much Gary for organizing and for inviting me. All the best to all.
20:42:17 From Yejin Choi To All Panelists : +1 this was an awesome gathering! Thanks much Gary and Vincent for organizing this! And thank you all for many thought provoking questions and discussions
20:42:38 From Gary Marcus To All Panelists : i will have a few brief closing words and then vince an even briefer closing
20:42:47 From ANTHKA To All Panelists : 🙏🏻 Thank you all! Great meeting and learning from all of you, albeit virtually :-)!
20:42:49 From Dave Ferrucci To All Panelists : I didn't address where I would focus as an AI student — but I agree with many of the points already made around ensuring AI is compatible with and complementary to human intelligence and human life.
20:43:18 From Erik Brynjolfsson To All Panelists : Thanks Gary, Vincent and fellow panelists. Terrific event - I learned so much!
20:44:41 From Jeff Clune To MONTREAL AI(privately) : Shoot, my internet faded at the worst moment there at the end… were you able to hear my comment?
20:45:34 From francesca rossi To All Panelists : Thanks all!
20:46:06 From Dileep George To All Panelists : Thank you all! It was great fun!
20:46:13 From Angela Sheffield To All Panelists : Learned so much from all of you! Thank you Gary and Vincent and all!
20:46:32 From Artur Garcez To All Panelists : Thank you all, Gary, and Vince!
20:46:35 From Jeff Clune To All Panelists : Thanks everyone. I enjoyed learning from you and hearing your thoughts.
20:46:51 From Gary Marcus To All Panelists : this was really amazing! it's a cliche to thank you all at the end, but, … wow!
20:47:04 From francesca rossi To All Panelists : Thanks Gary for inviting me again!
20:47:19 From Gary Marcus To All Panelists : vince will close the streams but we can hang on this call for a few minutes
20:47:22 From Jeff Clune To All Panelists : And a huge thanks to Gary and Vincent for organizing and for the invitation