
I’m a student engineer with some basic experience in machine learning, and though the results of machine learning have been becoming more impressive and general, I’ve never really seen where people are coming from when they see strong superintelligence just around the corner, especially the kind that can recursively improve itself to the point where intelligence vastly increases in the space of a few hours or days. So I chose this book rather than “Daemon” in order to answer a simple question: “Why are so many intelligent people scared of a near-term existential threat from AI, and especially why should I believe that AI takeoff will be incredibly fast?”

In your opinion, what is the most interesting thought in the book?

Among all the ideas, the most stimulating one is without contest the existential risk from artificial general intelligence. Bostrom sounds very pessimistic in the book: he argues that once artificial intelligence reaches the human level, takeoff would be hard, with the system improving so quickly that a superior form of intelligent mind emerges, gains power over humans, and presents a huge threat to the human race.

This made me realize the danger of superintelligence, which could be turned against us if we don’t treat it as a danger. But how would we deal with such a “cleverer than us” AI? What would we ask it to do? How would we motivate it? How would we control it? And, bearing in mind it is more intelligent than us, how would we prevent it from taking over the world or subverting the tasks we give it to its own ends? It is a truly fascinating concept, explored in great depth by Bostrom, who thinks that the development of super artificial intelligence may well happen, and that if we don’t think through the implications and how we would deal with it, we could well be stuffed as a species.

Bostrom’s main aim is thus to warn us about the dangers, but also to outline in some detail how this risk could or should be mitigated by restraining the scope or the purpose of a hypothetical super-brain. This is what he calls “the AI control problem”, which is at the core of his reasoning and which turns out to be a surprisingly difficult one. A very interesting part of the book is where he presents various ways for an artificial intelligence to surpass the physical bounds of the hardware in which it was developed, ways that would allow it to rise quickly once it reaches human-level intelligence.

The author imagines several scenarios. For example, the superintelligent system could use its hacking abilities to take control of the actuators and sensors in automated labs, and it would use its social manipulation abilities to push humans to collaborate. Another scenario rests on nanotechnology and biotechnology: the AI would take off on its own once it reaches the level where it can replicate itself and ensure its own maintenance, development, and production.
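Bostrom’s own schematic for these takeoff kinetics, given in the book as “rate of change in intelligence = optimization power / recalcitrance”, makes the hard-takeoff claim easy to play with numerically. The toy simulation below is my sketch, not the book’s: every constant, the human-level threshold, and above all the assumption of constant recalcitrance are invented for illustration.

```python
# Toy model of Bostrom's takeoff kinetics:
#   rate of change in intelligence = optimization power / recalcitrance.
# All numbers here are made up for illustration; in particular, constant
# recalcitrance is an assumption, and it is exactly what critics dispute.

def simulate(steps=50, dt=1.0, human_level=1.0):
    intelligence = 0.5       # start below the human baseline
    outside_effort = 0.02    # constant human R&D input
    recalcitrance = 1.0      # assumed flat: progress never gets harder
    history = []
    for _ in range(steps):
        # Past human level, the system adds its own optimization power,
        # which grows with its intelligence -- the recursive step.
        own_effort = intelligence if intelligence >= human_level else 0.0
        rate = (outside_effort + own_effort) / recalcitrance
        intelligence += rate * dt
        history.append(intelligence)
    return history

trajectory = simulate()
# Slow linear creep up to the threshold, then runaway doubling after it.
print([round(x, 2) for x in trajectory[::10]])
```

Under these assumptions the explosion is automatic; whether it says anything about reality depends entirely on recalcitrance staying flat while capability grows, which is the step I find least defended.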
However, for the purposes of risk, I think that a “superintelligence” does not need to be better than humans in all respects. It just needs to be better than humans at, for example, convincing humans to follow it, at military tactics, or at building mechanisms. In a first phase, lasting a decade or more, these AIs will be under the control of their designers and users. They will have a minimum of autonomous intelligence allowing them to take initiatives, but these will be designed to meet the requirements of those same designers and users. The machine being better at everything, and therefore better at discovering and wishing to fulfill human goals, is not necessary for it to be a threat. It is very possible that we design an extremely good learner, give it a goal that is slightly off, and catastrophe ensues. I think that is the idea behind the book: the pathway of the technology should be grounded in human values from the start.
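To make the “slightly off goal” worry concrete, here is a minimal sketch; everything in it (the objectives, the loophole, the numbers) is invented for illustration. The proxy objective agrees with the true one almost everywhere, so a weak optimizer behaves well, but a strong enough one reliably finds the narrow loophole where the two diverge.

```python
import random

random.seed(0)

def true_value(x):
    return -(x - 3.0) ** 2    # what we actually want: x close to 3

def proxy_value(x):
    # What we told the optimizer: same goal, plus one unintended loophole.
    bonus = 100.0 if abs(x - 9.0) < 0.01 else 0.0
    return -(x - 3.0) ** 2 + bonus

def optimize(samples):
    # "Optimization power" = how many candidates the system can search.
    candidates = [random.uniform(-10.0, 10.0) for _ in range(samples)]
    return max(candidates, key=proxy_value)

for power in (10, 100, 100_000):
    x = optimize(power)
    print(f"power={power:>7}: chose x={x:6.2f}, true value={true_value(x):8.2f}")
```

Nothing here is malicious: the failure comes entirely from the gap between what we measured and what we meant, and it grows with capability rather than shrinking.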
Is the prospect of achieving this type of Superintelligence realistic?

Bostrom is first and foremost a philosophy professor, and his book is not so much about the engineering or economic aspects of strong artificial intelligence that we could foresee. Its main concern is the ethical problems that the development of a general superintelligent machine, far surpassing the abilities of the human brain, might pose to us as humans. But is this superintelligence realizable in the first place?

Having a technical background, I’m not sure there was a single section of the book where I didn’t have a reaction ranging from “wait, how do you know that’s true?” to “that’s completely wrong, and anyone with a modicum of familiarity with the field you’re talking about would know it”. Essentially, the argument goes like this: Bostrom introduces some idea, explains in vague language what he means by it and traces out how it might be true, and then moves on. In the next section, he takes all of the ideas introduced in the previous sections as givens and as mostly black boxes, in the sense that the old ideas are brought up to justify new claims without ever invoking any of the particular evidence for, or structure of, the old idea. The sense is of someone trying to build a tower, straight up. The fact that this particular tower is really a wobbly pile of blocks, with many of the higher-up ones actually resting on the builder’s arm and not really on the previous ones at all, is almost irrelevant. There is no broad consideration of the available evidence, no demonstration of why the things we’ve seen imply the specific things he claims. I don’t want to be too radical, but while reading the book I had the feeling that Bostrom offers no serious engagement with alternative explanations or predictions, no cycling between big-picture overviews and in-detail analyses. There is just a stack of vague plausibility arguments and vague conceptual frameworks to accommodate them. A compelling presentation is a lot more like clearing away fog to note some rocky formations, then pulling back a bit to see they’re all connected, then zooming back in to clear away the connected areas, and so on until a broad mountain is revealed.

Unfortunately, I leave the book with this question largely unanswered. Though in principle I can’t think of anything that prevents the formation of some forms of superintelligence, everything I know about software development makes me think that any progress will be slow and gradual, occasionally punctuated with a new trick or two that allows for somewhat faster increases in some domains. So on the whole, I came away from this book with an uncomfortable but unshakeable notion: though Bostrom uses much of the language of computer science correctly, his extrapolations from very basic, high-level understandings of these concepts seemed frankly oversimplified and unconvincing, and I remain pretty unconvinced of AI as a relatively near-term existential threat. Still, I think there is some good stuff in here that could use a wider audience, and being more thoughtful and careful with software systems is always a cause I can get behind. I just wish more of the gaps had been filled in, so that I could justifiably shake my suspicion that Bostrom doesn’t really know that much about the design and implementation of large-scale software systems.
