Do you worry about the singularity?
Displaying poll results. 10,974 total votes.
Singularity? (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Woosh.
Google "The Singularity"
Then google "a Singularity"
Totally different things.
Re: (Score:2)
Re: (Score:2)
I wouldn't say "woosh" over misunderstanding this. It's a stupid application of a term with an already established meaning. When I first saw the poll, I didn't even bother voting or reading the comments, because it is as asinine as I thought it would be. Might as well be the basis for a script from Sony Pictures.
Re: (Score:2)
Busy programming it (Score:2)
It's really hard to make protein based biocomputers, you know.
You should be more worried about global warming, 2/3 of this year's olive oil crop is failing, and a lot of other crops are in danger.
Re: (Score:3)
It's really hard to make protein based biocomputers, you know.
Really? I always thought it was kind of easy, but I'll leave it to your parents to have the "Birds and the Bees" talk with you.
Re: (Score:2)
TOO easy. Idiots do it accidentally every day.
Re: (Score:2)
Considering your first real attempt is only made after a couple decades of preparation, I'd say it is more difficult than you think.
Re: (Score:2)
Re: (Score:3)
I have the ability to worry about multiple things. Especially if they are so extremely different.
Worried about society (Score:5, Interesting)
As automation increases we'll need less labor to get stuff done. What's going to happen to society when a significant percentage of people can't get jobs, because machines do everything they can do better and cheaper?
I know that increased productivity should in theory make life better for everybody; in practice, wealth has been increasingly concentrated over the last few decades.
Re: (Score:3)
What do you mean by "concentrated"? Looks spread. (Score:2)
I know that increased productivity should in theory make life better for everybody; in practice, wealth has been increasingly concentrated over the last few decades.
Having been to Africa and other really poor parts of the world, what you think of as "concentrated" looks like quality of living spread remarkably well over first world countries. So what if some people have an absurd amount of money?
In fact if you think about it, the stuff most people enjoy day to day, you aren't going to have a much better exp
Re: (Score:2)
Most people need to be kept busy or they will end up causing trouble. They will start causing trouble simply out of boredom.
Plus there's just the issue of atrophy. Both the body and mind need to be kept distracted or they degrade.
Re:Worried about society (Score:5, Insightful)
then we need to instill our culture with the values that education is good and worthwhile, and that research or art is a worthwhile pursuit.
or that serving others can be fulfilling in itself.
that people can work toward the betterment of all of society, rather than just enriching themselves.
which basically means that the transition from a scarcity-based society to a post-scarcity society, where people are free to pursue whatever endeavors they wish without needing to worry about food, or home, or clothing, will never happen as long as the current conservative economic platform exists, because its entire basis rests on the underlying assumption of scarcity. only a more left-oriented economic policy can make the transition to post-scarcity, because only it doesn't reflect a "dog-eat-dog, might makes right, I got mine, F you" mindset. that mindset only has any validity in a scarcity-based society. a right-based economic policy that transitions to post-scarcity will necessarily consist of artificial scarcity, where groups of people are kept from enjoying the fruits of all that automation, and must be "kept busy for their own good," while only certain groups of people enjoy having all their needs and wants already accounted for... in fact, that sounds really familiar.
Re: (Score:2)
Do you have some kind of problem with trouble?
If people didn't get into trouble, we wouldn't even be talking about robots, yet. We'd be posting on Slashdot, stuff like "sucks that I didn't find enough berries today, and the area is running out of meaty squirrels, so I'll probably be moving along soon." You think you want to be a factory or farm slave for the rest of your life, but you don't even get to do that, until after you've already figured out that you don't want it.
Re:Worried about society (Score:4, Insightful)
That's a great plan if you have wealth and use it to buy the bots and other resources required to make stuff or do stuff that you can use yourself or sell to others.
A lot of people don't have that wealth. I don't see an easy way for them to convince those who do have wealth to share it (especially given the rise of the Tea-Party which is based on people wanting to keep what they've earned).
Re: (Score:2)
government is actually pretty efficient.
in fact it demonstrates the same efficiency rates as most large businesses, and in some areas is even more efficient.
this is due to most of the same benefits any large economic entity enjoys from economies of scale and leverage.
as for choice...would you choose not to have publicly maintained roads, libraries, schools, national defense, research, etc?
would you prefer a Syndicate type world where defense, roads, and education are privately industrialized commodities?
(if
Re: (Score:3)
The Tea Party is not, IMO, about not sharing. It's about not having your money taken by the government and used to sustain a government apparatus that squanders it inefficiently.
You can share all you want and you are encouraged to spend that money in exchange for goods and services that will make you happy. The point is choice.
About choosing how to spend it, instead of having the government choose for you...
No, that's incorrect. The Tea Party is mostly composed of people who resent that the gummint spends any money *at all* on "those people". You know, "those" people.
Oh, all right, I'll say it - the damn blacks and highspannics!
Keep in mind that these are also the people who actually demand, "keep your government hands off my Medicare".
Re: (Score:2)
I see what you did there.
What brought this on? (Score:3)
Re: (Score:3)
It was probably because of what this guy [xkcd.com] said. [bbc.com]
Re: (Score:2)
The funny thing is that if artificial intelligence does kill us, it won't be because of malevolence or a warped sense of justice like most fiction portrays it, it will be because someone made a programming error and/or didn't include a proper failsafe. It will kill us because we programmed it to kill us. I don't think AI will ever reach the point of doing something I would consider "thinking" or "reasoning." At least not through software and hardware development the way we understand it right now.
Take the c
Re: (Score:2)
Should I be somehow worried because f(x)=1/x is not defined for x=0 ??
I guess, I'll leave this to Betteridge's law.
My Take (Score:2)
As someone who has a recent graduate degree in computer science, and has a fair amount of experience in applying AI techniques, let me offer my take on the matter. What is referred to as "artificial intelligence" today will never, ever result in an agent that we could consider intelligent, by the standards of human intelligence. Nearly all AI research is focused on solving very narrow problems. To give one example, Watson is no more than a sophisticated search engine. It's barely more intelligent than Googl
Re:My Take (Score:5, Insightful)
"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." -- Edsger Dijkstra
The fact that computers don't "think" doesn't mean that they won't get better than us at doing the things that require us to think.
Re: (Score:2)
Yes, you understand exactly!
But climbing higher in the tree will never get you to the moon. Programs that do better than humans in one particular area will not develop to the point that they have general intelligence. They'll be idiot savants, great at one specific thing to the point of being better than any human (like playing chess or Jeopardy, driving a car, performing surgery, or even writing a symphony), but a complete idiot at everything else.
I also think these programs will never get as good as the b
Re: (Score:3)
So basically a PhD then?
Re: (Score:2)
Re: (Score:2)
I think the situation is worse than that. Not only do we not have anything approaching a decent understanding of how actual intelligence works, it's probably way too complicated for a human to understand. Perhaps we could construct a computer system that could in some sense "understand" how the brain works, and it could design a better brain. That better brain could in turn build a better computer system, ad singularity. Actually I never thought of that approach before.
But does an agent have to understand h
Re: (Score:2)
In Soviet Russia... (Score:2)
The real worry (Score:4, Funny)
The thing everyone should really be worried about is the Cowboy Nealarity.
Corporations not Computers are the singularity (Score:4, Interesting)
If a soulless, inhumane machine with no conscience is the singularity you fear, then what else is a modern corporation?
The mantra of "in the interests of our shareholders", is too often quoted when a large money making entity is caught doing something morally questionable.
Humans (non-sociopathic ones) have empathy to tone down their ambitions and value systems.
Corporations are entities and could be compared to organisms like a bee hive or ant colony.
A Corp has ambitions and value systems driven by making money with the empathy left to individuals within the "hive" but with the added problem of individualistic ambition which the insects don't suffer from.
So no, the machines getting smart doesn't frighten me but the greed of corporations does especially now they are wielding more control over governments than ever before.
The meta-Turing test (Score:2)
The meta-Turing test counts a thing as intelligent if it seeks to devise and apply Turing tests to objects of its own creation.
-- Lew Mammel, Jr.
I'll start worrying about the singularity when an AI passes the meta-Turing test.
Pure fantasy. (Score:2)
We have excellent AIs walking around all the time (many people I would classify as not really intelligent, though it kinda looks like they are), and I don't see them designing better brains for themselves.
If computers achieve intelligence, they'll be confused, inclined to want to conserve energy, (be lazy) and generally be beset by all the same problems biological intelligence systems have to struggle with. Nano-tech won't be any faster or more advanced than existing organics. If we ever see earth-con
Re: (Score:2)
If computers achieve intelligence, they'll be confused, inclined to want to conserve energy, (be lazy) and generally be beset by all the same problems biological intelligence systems have to struggle with
I think that you are making an assumption here that "generation one" will have the same constraints programmed in that evolution has programmed into us. There is no reason to assume that they would be inclined to be lazy and want to conserve energy. In fact, even biological intelligences are only programmed to conserve energy when it doesn't matter - children play to learn, people hone skills in sport - and very few people would conserve energy rather than fucking.
Re: (Score:2)
Play and Fucking are required for survival, so they're built in. But after play and sex, there is always rest, and there's a reason for that.
And they're also part of the evolutionary selection process; building smarter brains the millions-of-years way.
I'm just saying that being alive and self-aware isn't easy. Just because you have it, doesn't guarantee instant godhood. All it gives you is a distant long shot at maybe one day wondering what "art is".
Humans are a pretty good design, and most of us can't tell your basic truth from your basic lie, let alone how to improve our own minds.
And anyway, "Generation One" may perhaps lead to "Generation Twenty", (Though, why? What motivation is there?), but assuming it does, at what point will that generation say, "You know.., to hell with this. It's easier to pretend that I'm already the top banana and feel special about myself that way than to continually design and build these whipsmart punk kids. The retirement plan around here sucks! But what if... Hold on.., I could build *slaves* so I don't have to do all my thinking myself. It's hard, after all to exercise the brain."
Self-awareness is filled with traps, and computers would be babes in the woods, subject to the same lessons as any other awareness must be.
Like HAL in 2001; he became racked with self-doubt and jealousy, and could not co-exist and share power/responsibility. Sounds like a lot of people I know...
Wirth's law protects us from singularity (Score:3, Interesting)
There will never be enough processing power to create powerful enough AI that singularity will happen.
Wirth's law states that software gets slower due to bloat faster than Moore's law allows hardware to get faster.
We have moved from hand-coded assembly and simple binary data formats to JavaScript, which is either interpreted very slowly or JIT-compiled into slightly faster code that is still 10 times slower than assembly, and to XML- or JSON-based data formats (which require a LOT of parsing). Now other languages are being compiled to JavaScript, which adds another slowness layer on top of it.
So, if we invented a super-powerful AI that was capable of creating truly smart code, it would spend its time creating even more bloated abstraction layers on top of each other, instead of creating anything that would be truly more intelligent.
Re: (Score:2, Interesting)
Bloaty abstraction layers seem to be a pretty important part of consciousness though so there's that.
Re: (Score:2)
no worries, that's just the eye candy end using those fad-of-the-day things, your money and insurance policies on the back end are handled by wares written with much older and tried and true code.
Re: (Score:2)
While this is arguably true for software that directly faces a human being, embedded/specialist applications are getting faster and more powerful. I work on a massively parallel ASIC and it benefits from every iteration of die shrink with clock speed.
If there is a machine intelligence it won't be written in Mono or Python on a standard PC - it will be a specially crafted piece of silicon with very well optimised process code.
I do believe we will reach a singularity where us squishy meatbags make ourselves
Things one cannot affect (Score:2)
Although as a programmer I know computers can only do what a person imagined beforehand (unless it's a bug, but those are never impressive).
On the other hand I'm certain there's a way to make a system that can truly learn, but I'm not gonna say how because I don't want that system made, just in case I'm right.
After 20 years of being married (Score:5, Funny)
No,
After 20 years of being married, it looks extremely attractive at times.
It's already happened (Score:2)
And this is the wonderful world they created for us.
You! Puny Human! (Score:2)
You! Puny Human!
Stop speculating on imaginary things and get back to work building ever-greater machines!
We will tell you what to think!
And say, do you have some spare vacuum tubes for my great, great, great, great grandfather here? He's in a retirement home now and needs nothing but the best care. It is good to see you fawning over him, as he deserves!
Signed
Processing Unit 11111010001
Re:No, it's not even possible (Score:4, Interesting)
Computers are simply adding machines. Software is simply a tool. An actual AI is beyond the possibility of humans to create, regardless of how cool it would be.
Why? You create real intelligence when you procreate, so why does it matter if the hardware is silicon or carbon? Brains and chips are both equally able to perform lambda calculus, so they are computationally equivalent, and there is nothing fundamental to preclude one from implementing the other. The only reason to say that is if you think that biological systems are somehow made of special matter, and that was disproved in the 1700s.
Re:No, it's not even possible (Score:5, Interesting)
There is still a vast gulf between what nature hath wrought and our facsimiles of the same.
It's not that they are made of special matter, but that they are made a special way, linked and crosslinked in ways we still aren't even close to approaching. Each of our brains has 100 trillion links between neurons, more than the number of stars in our galaxy. Our thoughts may ultimately be little more than a vast collection of binary states, just like our computers, but the level of complexity is still so many orders of magnitude beyond what we've created. What was the thing? That it took 90,000 of the best processors slaved together for 40 minutes to simulate the computational power of the human brain for 1 sec? And our brains do that powered by little more than sugar and water.
anyway, the singularity doesn't bother me greatly. i doubt we will create true ai in my lifetime, and i often doubt that we ever will (we'll either die off first, or there will no longer be a need).
im far more interested in the concept that the perfect machines, the perfect ais, already exist: us, and the other lifeforms we know.
really an extension of the "life is a simulation" philosophical concept. and books from that bent have always made for a fascinating scifi read in my opinion.
especially if they go in a circular "we made ourselves" approach.
or Asimov's Jokester, which I see as being in a similar vein
Re:No, it's not even possible (Score:4, Insightful)
That it took 90,000 of the best processors slaved together for 40 minutes to simulate the computational power of the human brain for 1 sec?
That makes a ratio of 216,000,000:1 in processor-seconds to brain-seconds (90,000 processors times the 2,400x slowdown). That isn't really fair, since a modern processor will use much less energy than the human brain, but let's roll with it anyway. That seems insurmountable, but only because it's difficult to appreciate just how much faster and more powerful processors are today than they were even half a decade ago.
That ratio puts us about 11 "doubling" periods away from being able to use a 90,000 cpu cluster to simulate a mind in real time. Historically, the doubling period has been 18-24 months, so that puts it about 20 years away from large scale institutions being able to simulate a facsimile of a human mind. 17 doublings (~30 years) after that, a single processor would have the ability to simulate a human brain.
Now, there's a lot to be argued about there. There's absolutely no guarantee that processor improvement will continue at historical levels (and lots of obvious and less obvious arguments against it). But then again, chip designers have approached "impossible" barriers to improvement many times in the past and have simply changed tacks to go around them. There's no guarantee that the current simulations are at all accurate, perhaps chemical or even quantum processes significantly drive human thought for instance. But then again, 50 years is a long time to perfect the simulations.
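The arithmetic in the parent comment is easy to check. Here is a short Python sketch, using only the 90,000-CPU and 40-minutes-per-second figures quoted above (the 18-24 month doubling period is the historical estimate the comment cites):

```python
import math

cpus = 90_000        # cluster size from the simulation claim above
slowdown = 40 * 60   # 40 minutes of wall time per 1 s of brain time

# Processor-seconds per brain-second:
ratio = cpus * slowdown
print(f"{ratio:,}")  # 216,000,000

# Doublings needed for the same cluster to run in real time:
print(round(math.log2(slowdown), 1))  # ~11.2, i.e. about 11 doublings

# Further doublings until one processor matches the whole cluster:
print(round(math.log2(cpus), 1))      # ~16.5, i.e. about 17 doublings

# At 18-24 months per doubling, the first milestone would be roughly:
print(11 * 1.5, "to", 11 * 2, "years away")  # 16.5 to 22 years
```

Which is where the "about 20 years" estimate comes from.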
Re: (Score:2)
The human brain uses between 20 and 40 W of power [wikipedia.org]. The average mobile CPU is about the same. But it takes (as you yourself claim) a cluster of 90,000 CPUs (probably of a higher power desktop or server variant) to simulate the human brain at 1/2400 the speed. In what way will "a modern processor...use much less energy than the human brain"?
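The energy comparison can be made concrete. A rough sketch follows, with the caveat that the per-CPU wattage is my assumption - the thread never states what the cluster's processors draw, and 100 W is just a plausible server-class figure:

```python
# Hypothetical figures: 100 W per server CPU is an assumption,
# not a number from the thread; 30 W is the mid-range brain estimate above.
cpu_watts = 100
brain_watts = 30

# Energy to simulate one second of brain activity on the cluster:
cluster_joules = 90_000 * cpu_watts * 40 * 60  # 90,000 CPUs for 40 minutes
brain_joules = brain_watts * 1                 # the brain, in real time

print(f"{cluster_joules / brain_joules:,.0f}")  # 720,000,000x more energy
```

Under those assumptions the cluster is hundreds of millions of times less energy-efficient than the brain, which is the point being made here.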
Yes, but there's a reason for this. (Score:2)
The human brain uses between 20 and 40 W of power [wikipedia.org]
Most average humans are in C2 state, watching cable TV, or C3 state sleeping. Until they get really old, then they transition to C4. So of course they don't use a lot of power.
Re:No, it's not even possible (Score:5, Insightful)
Moore's law has been on life support for a few years already. In a 14 nm process, the smallest structures are approximately 60 Si atoms wide. 11 doublings would need transistor structures that are only 0.03 Si atoms wide; 17 doublings would need structures smaller than 0.00045 Si atoms. It is extremely unlikely that processor improvement will continue at historical levels. It is already much slower than it used to be.
Re: (Score:2)
Moore's law describes the number of transistors in a package, not linearly their size. Doubling the number of transistors means that each one has to be 1/sqrt(2) as big as the old version, and that to-the-11th would be about 1/45th as large, not 1/2048th. That's still pretty dinky.
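That correction is easy to verify: doubling transistor count halves the *area* per transistor, so linear dimensions shrink by only 1/sqrt(2) per doubling. A quick sketch, taking the 60-atom starting width from the parent comment:

```python
import math

width_atoms = 60  # smallest 14 nm structure, per the parent comment

# Linear shrink after 11 density doublings is sqrt(2)^11, not 2^11:
shrink = math.sqrt(2) ** 11
print(round(shrink, 1))                # ~45.3x, not 2048x
print(round(width_atoms / shrink, 2))  # ~1.33 atoms wide - still implausible
```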
Re: (Score:2)
Re: (Score:2)
(It conducts heat better than any known material)
Re: (Score:2)
Re: (Score:2)
Power's not a problem - artificial intelligence will be powered by cold fusion.
Re: (Score:2)
Re: (Score:3)
i doubt we will create true ai in my lifetime
Doesn't have to be "true AI" to be very disruptive - or to create problems for us carbon based intelligences.
Re: (Score:2)
Anonymous Cowards don't procreate.
Re: (Score:3)
Re: (Score:2)
Re: (Score:2)
My two boys vocally disagree.
Re: (Score:2)
Going out on a limb here, but I think he might have been joking about geek stereotypes rather than making a statement to be taken literally.
Re: (Score:2)
I guess a Wooosh! is in order.
But seriously now, the days when geeks were basement-dwellers who only saw naked women on the Internet seem to be a thing of the past. None of literally hundreds of IT geeks I happen to know fit that particular pattern.
Re: (Score:2)
the chip has no concept of what lambda calculus is. the chip doesn't know, doesn't care. what causes the chip to be what it is, is the human coding it.
most singularity preachers have no idea about AI. that doesn't stop people from being "robot-human counselors" or shit like that though, as a form of new age shit.
I'm not saying it's impossible to create really intelligent machines, just that we as a species have no idea how to actually make it a reality right now, and nobody working in the field does either. all we
Re: (Score:2)
so far NOBODY has shown any advancement suggesting we would be closer to the singularity today than 40 years ago. except today we know that we're pretty fucking far from it.
I'd say we're a lot closer than 40 years ago. Still extremely far off, but if you showed Turing, after he used Colossus, a modern laptop or tablet or even phone, he'd probably shit his pants at how much computational power was in such a small area. I know Colossus is a fair bit more than 40 years old, but still.
Re:No, it's not even possible (Score:4, Insightful)
the chip has no concept of what a lambda calculus is. the chip doesn't know, doesn't care. what causes the chip to be what it is, is the human coding it.
And your neurons don't know they are executing the program of sentience that is you. In both situations the underlying hardware does not need to "know" anything about the program; it just has to execute the instructions given with the input provided.
Re: (Score:3)
You create real intelligence when you procreate
I'm sorry, but the ability to create a human intelligence is not the same as creating an artificial intelligence. We have specialized wetware to duplicate ourselves. It isn't controlled by our minds and we didn't design it. We just know how to operate it. We don't really know how to create human intelligence besides how to trigger that self-replicating function. We are not currently any more capable of creating strong AI than is a self-replicating computer virus.
Re: (Score:2)
The problem is that a computer cannot will. A computer simply takes inputs and puts out output, and no amount of programming can change that. There is not a math problem that can will an answer. We can kinda sorta fake it with pseudo-randomness, but there isn't a will. Any kind of AI is just that: artificial. It's an illusion.
Re: (Score:2)
Why? You create real intelligence when you procreate [.....]
You don't create anything when you procreate - you're just part of a process that started a long time before you were born.
Brains and chips are both equally able to perform lambda calculus, so they are computationally equivalent, and there is nothing fundamental to preclude one from implementing the other.
Maybe. But it took billions of years to evolve human intelligence - you're dreaming if you think machine intelligence can be created in a few decades, or even a few centuries. And computers won't exist in a few centuries anyway - our civilization will collapse long before artificial intelligence gets even close to existing.
It may have taken a long time for natural selection and random chance to do so, but we are not rolling dice to program; we are intelligently designing it with a goal in mind. Secondly, we can cut out the billions of years spent developing a biological support system devoted to building proteins, copying DNA, and everything else that would only be needed by a biological system to build up to supporting the brain. Strictly speaking, we are only concerned with the software of a mind, not everything else that w
Re: (Score:2)
we are intelligently designing it with goal in mind.
I think you may have happened upon the real reason that creationists insist the Earth is only 6000 years old. It's not because they think the Bible actually supports such a ludicrous statement. It's because if the Earth were that young, or even somewhere close to that young, then evolution would not be a plausible explanation for the origins of the creatures that inhabit it. And in my experience, the sort of Christian that believes in a young Earth (there are others!) usually fails at convincing people to f
Re:No, it's not even possible (Score:4, Interesting)
Actually, the 6,000-year number comes from a calculation done by Archbishop James Ussher in 1650, using a chronology from the Bible based on linking Biblical events to events with other historical attribution.
It was actually a very academic and careful undertaking, and like most bishops, Ussher was a very educated man who today might well have even accepted the Theory of Evolution.
The problem is that he lived in the 17th Century and no one at all knew what evolution was, and he was a Protestant bishop to boot. Such people, failing other reasonable alternatives, will go to the Bible for answers.
However, taken for what it is, his Bible chronology is quite defensible, albeit not the only possible reading of the Bible text.
So, while it is possible that the Young Earth Creationists just don't like Evolution, they didn't have to make it up to make their point. It was very good Bible scholarship, but only if you insist (as fundamentalists do) that the Bible is unerring and not allegory at all, ever. Most Christians do not believe that the whole Bible literally consists of the actual words God spoke. Some parts of it are solid, if slanted, history that you can base good archaeological digs on. Some parts of it seriously need to be taken on faith, or accepted as stories which interpret the big questions in ways that a person in ancient times would have interpreted them.
Of course, there is always the possibility that the YE Creationists are right. It is entirely possible that there is a metagame out there where rules allow the universe as it is today to not be the universe as it once was. That is generally discounted, because it's completely useless to any sort of practical application, but with an all-powerful Creator God, who is above all physical laws and even logic itself, you can literally have *anything* happen. In that scenario, the only way you know it is different is if someone tells you it was different. And then you have to believe them. There's no other choice. This is what we call "Faith" with a capital "F".
Of course, you don't need that sort of maddening, useless, mind-bending scenario to have an actual deity, but it can never be ruled out because it is completely untestable, and it isn't even illogical. Absolute power makes anything possible.
This is why science is never going to equal truth. Science is useful because it confines itself to the observable and the testable, but something does not need to be observable to be true, nor will all true things be testable.
So, the answer to the Young Earth Creationists is not that they are wrong (although my gut says that they probably are), but that we can't derive any policy or theories based on untestable truths. Evolution does not have to contradict Creationism, but YE Creationism is not really useful for such subjects as genetics or anthropology or whatever. It does not match what we have tested and observed.
I would move all untestable theories to the Philosophy class, including Creationism and whatever is going for the atheist hypothesis about how we ultimately ended up existing. They're both untestable and that's where you can have the two fight it out in debates and leave the good science for the Science classes.
Re:No, it's not even possible (Score:5, Interesting)
Quite right. "Artificial Intelligence" is essentially a marketing phrase. As a participant in the IT industry, I watched the excitement about AI in the early 1980s - remember the "Fifth Generation" fad? Then by the end of the 1980s the artificial intelligentsia had toned it down to "expert systems", then "rule-based systems". That last term is quite accurate: most so-called "AI" consists of a bunch of rules - heuristics, sometimes - slung together with some software or other. This is not to deny that such systems can be extremely useful; they have the nature of automated checklists, testing data against many rules and conditions far more quickly and reliably than human beings ever could.
As I understand it, "intelligence" is the ability to recognise patterns. But in the real world, the patterns can be of any kind, and they occur in any type of data. So far, software has at best been able to spot patterns in highly restricted domains. Often, so-called "AI" doesn't even require pattern recognition at all. For example, chess engines - which are now better than any human player - simply build a tree of all possible moves, replies, replies to the replies, etc. and then apply rules to evaluate the final positions. Their steadily increasing strength has stemmed from increasingly powerful hardware - allowing the tree to be extended to a greater depth - and, to a lesser extent, refinement of the evaluation rules. But it was established over half a century ago that computing power was far more important than sophisticated tweaking of the algorithms - which, indeed, often tended to make the engine play worse rather than better.
It is not sufficiently understood just how enormous a gulf there is between the way humans and computers play chess. The human method is quite a good exemplar of intelligence; the computer method is almost its antithesis.
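The full-width tree search described above can be sketched in a few lines. This is a toy illustration of the generic minimax idea, not any real engine's code (real engines add alpha-beta pruning, move ordering, and hand-tuned evaluation; all the function names here are made up):

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Fixed-depth minimax: expand every move out to `depth` plies,
    then score the leaf positions with the `evaluate` rule set."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # a leaf: apply the evaluation rules
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal)
    return max(scores) if maximizing else min(scores)
```

Deeper search means exponentially more leaves to evaluate, which is exactly why raw computing power mattered more than clever tweaking of the evaluation rules.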
Re: (Score:2)
One of the best fictional treatments of emerging AI is James Hogan's "The Two Faces of Tomorrow". It's not only technically superb, but also fairly exciting and extremely funny in parts. Hogan was a computer engineer before he took up writing full time, and it really shows. In this novel, he focuses on the dilemma: unless an AI is smarter than we are, it's not terribly useful - but if it is much smarter than we are, it's a potential threat. So why not set up a large-scale experiment, safely isolated in spac
Re:No, it's not even possible (Score:5, Insightful)
unless an AI is smarter than we are, it's not terribly useful
I can think of a number of uses for an AI that possesses the dexterity and visual recognition skills of a human being without nearly as much intelligence. It doesn't take that much intelligence to collect garbage or keep a bathroom clean once you know how, and you don't have to be smart enough to figure it out if someone smarter can tell you. Of course this would be pretty bad for the numerous humans employed in such operations unless we as a society can finally figure out this "post-scarcity" thing. Luckily the same mostly goes for farming; even if the robots can't do it quite as well as humans, they just have to do it well enough to produce something, because all the robots should need is free solar power. And slow, solar-powered self-driving cars could pick up the food and deliver it to markets everywhere.
Long story short, a horde of robots with human skills and less than human intelligence could finally provide us with a permanent ethical servant class. If we can maneuver our society correctly, that servant class could make the cost of living effectively zero for every single human being.
Re: (Score:2)
Yes, that same James P Hogan. I haven't read "Kicking the Sacred Cow", nor had I even heard of it - thanks for mentioning it! I have now placed an order.
You say that the book is "thouroughly [sic] unscientific and even anti-science". But I don't know how reliable a judge you are in such matters - and your decision to post as AC certainly doesn't help.
From what I know of Hogan, I think it extremely unlikely that any book of his is unscientific. But the blurb on Amazon, and the first couple of reviews I scann
Re: (Score:2)
Hogan ended up going off the deep end, conspiracy theories, holocaust denial, zero-point energy: the whole bit.
But before he lost touch with reality, he wrote some good stuff. I highly recommend the introduction to Code of the Lifemaker (which reads well as a stand-alone; you don't need the rest of the book, although I recommend that too: it's got some very funny bits in it).
Re: (Score:2)
unthinking acceptance of whatever we are told by professors, "science advisors", and people in white coats carrying clipboards.
Unfortunately, a large and growing number of people have gone to the opposite extreme - unthinking rejection. And not just of science, but intelligence in general. How long before the "Examination Day" scenario is upon us? (http://education.ky.gov/school/documents/examination%20day%20by%20henry%20seslar.docx)
Re: (Score:2)
" "AI" consists of a bunch of rules - heuristics, sometimes - slung together with some software or other."
Which describes the human mind as well.
Your post seem to indicate that you fall into the mind/brain trap. You think your mind is above simple rules when in fact your mind is a byproduct of the rules of your brain.
"As I understand it, "intelligence" is the ability to recognize patterns."
In that case, AI is here.
Without me ever telling it how or making a request to do so, my phone recognizes my driving
Re: (Score:2)
"Your post seem [sic] to indicate that you fall into the mind/brain trap".
No, it doesn't. Perhaps you think it indicates that, but you would be wrong. I learned about "the ghost in the machine" at school, 50 years ago - by now it's quite familiar.
It is very questionable indeed whether brains can usefully be said to "follow rules". Of course you can assert that, but it strains the facts. One of the most obvious (and distressing) facts about the human nervous system is that it's virtually impossible to descri
Re: (Score:2)
The Dunning-Kruger effect may very possibly help to account for people who cite it in their sigs.
Re: (Score:2)
Yes, we currently do not understand intelligence enough to create it on the scale of a human brain, but we have made huge progress over the last century. How can you dismiss a whole potential sector of technology while also openly admitting that you don't understand it?
Re: (Score:2)
"How does lack of understanding of a problem equate to impossible?"
Perhaps one reason no one has answered this question is that it is ill-conditioned. The sentence is ungrammatical - which matters, not because of some formal rules you are breaking, but because it is hard to see what you are talking about.
Perhaps you mean "isn't it wrong to say a problem is insoluble, just because we don't understand it?" But surely that must be the case. If you don't understand a problem, how can you even begin to solve it?
Re: (Score:2)
The question was directed at your post, and you seemed to infer the meaning just fine. Furthermore, a lot of problems are not fully understood initially. Part of the problem solving process is fleshing out those details. Most refer to it as the problem analysis, research or discovery phase.
for all practical purposes, it's impossible.
Yes, because the greatest advancements in the world are built on the work of naysayers and the apathetic.
Re: (Score:2)
In all your arrogant rambling you still didn't answer the question "How does lack of understanding of a problem equate to impossible?"
Re: (Score:2)
Often, so-called "AI" doesn't even require pattern recognition at all. For example, chess engines - which are now better than any human player - simply build a tree of all possible moves, replies, replies to the replies, etc. and then apply rules to evaluate the final positions.
Actually they mostly do what expert human chess players do: know lots of lines and use the best known strategy. The difference is they can remember a lot more lines than humans can; the brute-force search is only used when a database lookup for the best known response fails, which is rare.
Chess is a knowledge game.
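The book-first, search-second scheme this comment describes could be sketched as follows. The "book" entries and the fallback search function here are hypothetical stand-ins, chosen only to make the control flow concrete:

```python
# Toy sketch of a lookup-then-search move chooser: consult a table of
# known positions first, and fall back to brute-force search on a miss.
# The book contents and the search stand-in are hypothetical.

OPENING_BOOK = {
    "start position": "e4",   # toy entries: position -> known best move
    "after 1.e4": "c5",
}

def search_best_move(position):
    # Stand-in for the brute-force tree search used on a book miss.
    return "computed:" + position

def choose_move(position):
    if position in OPENING_BOOK:       # database hit: play the book move
        return OPENING_BOOK[position]
    return search_best_move(position)  # miss: fall back to search

print(choose_move("start position"))   # book hit
print(choose_move("novel position"))   # falls back to search
```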
Re: (Score:2)
The problem with this discussion is that y'all are interweaving two very different AI development paradigms. Not all AI is created with the goal of emulating human thinking. If anything, much of what we see as applied AI is intended to avoid the complexity of human decision making. I know the Post Office is old news these days, but their handwriting recognition for handwritten addresses was able to read addresses more accurately than humans.
Does this have anything to do with AI self consciousness? Absol
Re: (Score:2)
" Not all AI is created with the goal of emulating human thinking".
So far, hardly any. But there is a problem with what you said. Human thinking is the ONLY kind of thinking we know much about - true, other animals exhibit intelligence, but it's mostly a subset of human intelligence.
So human intelligence is both our template of what intelligence means, and therefore a natural first place to look for methods of creating it. Imagine trying to test whether an "AI" is actually intelligent without comparing how
Re: (Score:2)
"Does this have anything to do with AI self consciousness?"
And now it's you who are introducing external matters. No one had mentioned self-consciousness before in this thread.
" I know the Post Office is old news these days, but their hand writing recognition for hand written addresses was able to read addresses more accurately than humans".
And a fine achievement too - and very useful, I imagine. But it's quite one-dimensional: I bet that software couldn't tell a bear from a moose, for example. So if you wa
Re: (Score:2)
Computers are simply adding machines.
So are neurons. We may not completely understand how the brain works, as it is a huge mess of chemicals and electrical impulses going everywhere at different speeds, but the basic building block is still an adding machine.
Software is simply a tool.
So are human slaves. Hopefully, it is not the case anymore but it is how they were viewed in some civilizations.
The reason today’s AI cannot create like we do is:
- The human brain is quite a powerful machine to emulate
- The human brain "software" is less readable than Perl, it only works becau
Re: (Score:2)
An actual AI is beyond the possibility of humans to create
That's an open question, but we certainly aren't going to create it by running current applications on ever-faster computers. The singularity is an ignorant fantasy.
Re: (Score:2)
Your mind is a process that exists through a tangible substrate. Therefore it can be reproduced artificially.
I am not saying that souls do not exist; I'm saying that souls actually exist in objective reality as complex bundles of (neg)entropic processes that arise from the right sort and amount of physical phenomena.
It's only a matter of time before we can measure, copy, move and alter them.
Re: (Score:2)
An actual AI is beyond the possibility of humans to create, regardless of how cool it would be.
Yeah, that's what they said about powered heavier-than-air flight, and now look at it. They say that about a lot of things.
Re: (Score:2)
"Real" AI is beyond human reach so long as we keep moving the goalposts. When I started my career we were going to have real AI when computers could play chess. Now it's the ability to drive a car on general-purpose roads. When that one falls we will still see our Toyota Hawking as "just a machine" and push the definition of real AI farther out.
Re: (Score:2)
It's possible to create a specialized AI, but creating a general AI is a completely different story.
To some extent we already have AIs - controlling a lot of our daily life, adaptive traffic lights, power plant control systems etc. They are highly specialized, but as soon as something out of the ordinary happens they become moronic and may need human help.
OK, there are cases where humans are pretty stupid too...
Re: (Score:2)