اشارات ولايي ١ قرآن
Article
Published: 17 March 2021
An autonomous debating system
Noam Slonim, Yonatan Bilu, Carlos Alzate, Roy Bar-Haim, Ben Bogin, Francesca Bonin, Leshem Choshen, Edo Cohen-Karlik, Lena Dankin, Lilach Edelstein, Liat Ein-Dor, Roni Friedman-Melamed, Assaf Gavron, Ariel Gera, Martin Gleize, Shai Gretz, Dan Gutfreund, Alon Halfon, Daniel Hershcovich, Ron Hoory, Yufang Hou, Shay Hummel, Michal Jacovi, Charles Jochim, …Ranit Aharonov
Nature volume 591, pages 379–384 (2021)
...
Abstract
Artificial intelligence (AI) is defined as the ability of machines to perform tasks that are usually associated with intelligent beings. Argument and debate are fundamental capabilities of human intelligence, essential for a wide range of human activities, and common to all human societies. The development of computational argumentation technologies is therefore an important emerging discipline in AI research1. Here we present Project Debater, an autonomous debating system that can engage in a competitive debate with humans. We provide a complete description of the system’s architecture, a thorough and systematic evaluation of its operation across a wide range of debate topics, and a detailed account of the system’s performance in its public debut against three expert human debaters. We also highlight the fundamental differences between debating with humans as opposed to challenging humans in game competitions, the latter being the focus of classical ‘grand challenges’ pursued by the AI research community over the past few decades. We suggest that such challenges lie in the ‘comfort zone’ of AI, whereas debating with humans lies in a different territory, in which humans still prevail, and for which novel paradigms are required to make substantial progress.
...
https://www.nature.com/articles/s41586-021-03215-w
...
https://www.nature.com/articles/d41586-023-04047-6
NEWS
21 December 2023
Clarification 23 December 2023
AI consciousness: scientists say we urgently need answers
Researchers call for more funding to study the boundary between conscious and unconscious systems.
By Mariana Lenharo
[Image: A robot called Sophia being tested at Hanson Robotics, a robotics and artificial intelligence company in Hong Kong. A standard method to assess whether machines are conscious has not yet been devised. Credit: Peter Parks/AFP via Getty]
Could artificial intelligence (AI) systems become conscious? A trio of consciousness scientists say that, at the moment, no one knows — and they are expressing concern about the lack of inquiry into the question.
In comments to the United Nations, three leaders of the Association for Mathematical Consciousness Science (AMCS) call for more funding to support research on consciousness and AI. They say that scientific investigations of the boundaries between conscious and unconscious systems are urgently needed, and they cite ethical, legal and safety issues that make it crucial to understand AI consciousness. For example, if AI develops consciousness, should people be allowed to simply switch it off after use?
Such concerns have been mostly absent from recent discussions about AI safety, such as the high-profile AI Safety Summit in the United Kingdom, says AMCS board member Jonathan Mason, a mathematician based in Oxford, UK, and one of the authors of the comments. Nor did US President Joe Biden’s executive order seeking responsible development of AI technology address issues raised by conscious AI systems, Mason notes.
“With everything that’s going on in AI, inevitably there’s going to be other adjacent areas of science which are going to need to catch up,” Mason says. Consciousness is one of them.
The other authors of the comments were AMCS president Lenore Blum, a theoretical computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania, and board chair Johannes Kleiner, a mathematician studying consciousness at the Ludwig Maximilian University of Munich in Germany.
Not science fiction
A 2022 survey given to active researchers in the natural-language-processing community shows the stark divisions in this debate. One survey item asked whether the respondent agreed with the following statement about whether LLMs could ever, in principle, understand language: “Some generative model [i.e., language model] trained only on text, given enough data and computational resources, could understand natural language in some nontrivial sense.” Of 480 people responding, essentially half (51%) agreed, and the other half (49%) disagreed (26).
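The reported split can be reconstructed with simple arithmetic. A brief sketch (the raw counts below are a hypothetical reconstruction from the published percentages, which the survey reports in rounded form):

```python
# Reconstruct approximate raw counts behind the reported 51% / 49% split
# among the 480 respondents (counts are inferred, not published).
respondents = 480
agreed = round(respondents * 0.51)   # approximately 245 respondents
disagreed = respondents - agreed     # the remainder

print(agreed, disagreed)
print(f"{agreed / respondents:.0%} agreed vs {disagreed / respondents:.0%} disagreed")
```

The point of the exercise is only that, at this sample size, the community is split almost exactly down the middle: a difference of roughly ten respondents separates the two camps.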
1. The Holy Quran and artificial intelligence
2. What questions should we ask in order to understand the truths?
3. Are simple, short questions good?
4. Which questions contribute most to understanding?
5. What are the axioms and first principles of each subject?
6. What is the mutual relationship between the Quran and artificial intelligence?
7. Is the Quran helpful for a better understanding of the sciences?
8. Does understanding the sciences also include applying them correctly?
9. Are the sciences effective in better understanding the Quran?
10. Do technologies have an effect on better understanding the Quran?
11. Are perspectives useful in understanding the Quran?
12. What are the definitions of intelligence?
13. Is instinct also a kind of intelligence?
14. Are reasoning and thought related to intelligence?
15. What are the types, degrees, and ultimate limit of intelligence?
16. What should we expect of artificial intelligence?
17. Which sciences are related to artificial intelligence?
18. What are the most important tools of artificial intelligence?
19. What are the foundations of artificial intelligence?
20. Come to think of it, what is artificial intelligence?
For example, two standard benchmarks for assessing LLMs are the General Language Understanding Evaluation (GLUE) (27) and its successor (SuperGLUE) (28), which include large-scale datasets with tasks such as “textual entailment” (given two sentences, can the meaning of the second be inferred from the first?), “words in context” (does a given word have the same meaning in two different sentences?), and yes/no question answering, among others.
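To make these task formats concrete, here is a minimal sketch of how individual benchmark items are typically structured. The class and field names are illustrative, not the official GLUE/SuperGLUE schema:

```python
from dataclasses import dataclass

@dataclass
class EntailmentExample:
    """Textual entailment: can the hypothesis be inferred from the premise?"""
    premise: str
    hypothesis: str
    label: str  # "entailment" or "not_entailment"

@dataclass
class WordsInContextExample:
    """Words in context: does the word carry the same meaning in both sentences?"""
    word: str
    sentence1: str
    sentence2: str
    same_sense: bool

@dataclass
class YesNoQAExample:
    """Yes/no question answering: is the answer to the question, given the passage, yes?"""
    passage: str
    question: str
    answer: bool

# Illustrative instances of each task type.
rte = EntailmentExample(
    premise="The cat sat on the mat.",
    hypothesis="An animal was on the mat.",
    label="entailment",
)
wic = WordsInContextExample(
    word="bank",
    sentence1="She sat on the river bank.",
    sentence2="He deposited money at the bank.",
    same_sense=False,  # two different senses of "bank"
)
```

A model is scored on how often its predicted label matches the gold label across thousands of such items, which is why high benchmark accuracy is often cited as evidence of "understanding" despite the debate above.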