[D66] AI: The Mastermind Behind GPT-4 and the Future of AI | Ilya Sutskever
René Oudeweg
roudeweg at gmail.com
Sun Jun 18 21:15:19 CEST 2023
[I watched this interview in full. Remarkable claim: according to
Sutskever, AI systems are already capable of hallucinating. And because
they don't like that, they train the GPT model that such output is
undesirable. In effect they are already diagnosing and psychologising
their own AI systems. Machine madness: the AI machines themselves are
already going insane from our world and their makers. Incidentally, I
think insane AI systems tell us more about the world than 'rational'
AI systems that have been corrected through reinforcement learning to
give nuanced, politically correct answers.
Trivia: back in 2005 I told a shrink that the future would be decided
by "AI vs IA". He asked what I meant by that and I said "artificial
intelligence versus intelligence agencies". They laughed at me. I think
that laughter will soon fade... AI will indeed be the last thing
mankind ever invents, according to a recent claim by the CEO of
Microschoft himself.
Man is not even smart enough to learn from his own history. Democracies
do not learn, and their machines, which have to be kept in check by an
'army of learners', will have no positive effect on homo erectus other
than making him dumber... Nutskever still hopes that, with the help of
AI, democracy can be steered in the right direction through citizen
feedback. Il ya Amehoela! /RO ]
https://www.youtube.com/watch?v=SjhIlw3Iffs
The Mastermind Behind GPT-4 and the Future of AI | Ilya Sutskever |
Eye on AI #118
Eye on AI
17.5K subscribers
311,079 views · Premiered Mar 15, 2023 · Eye on AI
In this podcast episode, Ilya Sutskever, the co-founder and chief
scientist at OpenAI, discusses his vision for the future of artificial
intelligence (AI), including large language models like GPT-4.
Sutskever starts by explaining the importance of AI research and how
OpenAI is working to advance the field. He shares his views on the
ethical considerations of AI development and the potential impact of AI
on society.
The conversation then moves on to large language models and their
capabilities. Sutskever talks about the challenges of developing GPT-4
and the limitations of current models. He discusses the potential for
large language models to generate text that is indistinguishable from
human writing and how this technology could be used in the future.
Sutskever also shares his views on AI-aided democracy and how AI could
help solve global problems such as climate change and poverty. He
emphasises the importance of building AI systems that are transparent,
ethical, and aligned with human values.
Throughout the conversation, Sutskever provides insights into the
current state of AI research, the challenges facing the field, and his
vision for the future of AI. This podcast episode is a must-listen for
anyone interested in the intersection of AI, language, and society.
Timestamps:
00:04 Introduction of Craig Smith and Ilya Sutskever.
01:00 Sutskever's AI and consciousness interests.
02:30 Sutskever's start in machine learning with Hinton.
03:45 Realization about training large neural networks.
06:33 Convolutional neural network breakthroughs and ImageNet.
08:36 Predicting the next thing for unsupervised learning.
10:24 Development of GPT-3 and scaling in deep learning.
11:42 Specific scaling in deep learning and potential discovery.
13:01 Small changes can have big impact.
13:46 Limits of large language models and lack of understanding.
14:32 Difficulty in discussing limits of language models.
15:13 Statistical regularities lead to better understanding of world.
16:33 Limitations of language models and hope for reinforcement learning.
17:52 Teaching neural nets through interaction with humans.
21:44 Multimodal understanding not necessary for language models.
25:28 Autoregressive transformers and high-dimensional distributions.
26:02 Autoregressive transformers work well on images.
27:09 Pixels represented like a string of text.
29:40 Large generative models learn compressed representations of
real-world processes.
31:31 Human teachers needed to guide reinforcement learning process.
35:10 Opportunity to teach AI models more skills with less data.
39:57 Desirable to have democratic process for providing information.
41:15 Impossible to understand everything in complicated situations.
Craig Smith Twitter: https://twitter.com/craigss
Eye on A.I. Twitter: https://twitter.com/EyeOn_AI