|
Post by F.K.M on Jun 25, 2011 13:46:52 GMT -5
According to Codex: Daemons, Slaanesh always existed, and yet has a defined birthdate. Meh. Not entirely related, but I wonder if Skynet could be stopped just by telling it a similar line. Robots can't handle that sort of thing. Makes me wonder about the Necrons and the Borg too, but I think the Necrons are immune. It would certainly end those threats, though it would be kind of anti-climactic.
|
|
|
Post by cheminhaler on Jun 25, 2011 15:57:20 GMT -5
The Imperium has one weapon the Galactic Empire would be at a loss to deal with.
Da Orkz.
Just put a few on Coruscant and watch it turn into a warzone within 50 years. Although the same thing is happening in the Imperium, and using xenos as a weapon is a bit radical, to say the least.
|
|
|
Post by ElegaicRequiem on Jun 25, 2011 15:58:05 GMT -5
Commissar to Necron Lord: "Everything I say is a lie. And now, I'm lying."
Necron Lord: *Explodes.*
|
|
|
Post by F.K.M on Jun 25, 2011 21:57:37 GMT -5
@requiem: That would've been an interesting end to The Matrix. I suppose nobody in these man-vs-machine fights figures out that one line like that would win the day for the humans in the most anti-climactic of ways. Problem with Skynet? Tell it something like that and bam, dead. Problem with the Borg? Tell them something like that and they're dead. It's especially funny because the Borg don't attack people who won't fight. Problem with the Cylons? Just tell them that and they're dead. It'd be totally anti-climactic but sort of funny, and it would end every humans-vs-machines war in everything. It's just that it would get pretty boring after a while.
|
|
|
Post by ElegaicRequiem on Jun 25, 2011 22:11:26 GMT -5
The humans in the Matrix never had a shot. Why? They never left the Matrix.
|
|
|
Post by F.K.M on Jun 25, 2011 22:47:44 GMT -5
The humans in the Matrix never had a shot. Why? They never left the Matrix. Yeah, I know, but telling the computers one of those lines would fry them.
|
|
|
Post by Julian Sharps on Jun 25, 2011 23:27:53 GMT -5
Yeah, I know, but telling the computers one of those lines would fry them. Except for the fact that it wouldn't. All an AI like Skynet or the Matrix would need to do to resolve this supposed paradox is decide that the speaker was lying when he said that "everything I say is a lie," or lying when he said "I am lying." Since the paradox relies on the assumption that both statements are true yet contradictory, it all falls apart once you consider that either or both of them could be falsehoods.
|
|
|
Post by Walrus on Jun 25, 2011 23:32:02 GMT -5
'Everything I say is a lie. And now, I'm telling the truth' would get them better...
|
|
|
Post by Julian Sharps on Jun 26, 2011 0:55:00 GMT -5
Nopey. It's simpler to assume that the person who just said, "and now, I'm telling you the truth," was lying then.
|
|
|
Post by Empirespy on Jun 26, 2011 3:06:58 GMT -5
There is one major problem with logical paradoxes: try telling a robot something. The Borg won't listen; the Necrons won't listen; most robots won't listen unless they have been programmed to, and Skynet would have made it so that Terminators don't listen. The best logical paradox for a robot is: "The next statement is true. The previous statement is false."
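For what it's worth, Empirespy's two-sentence pair really is a paradox in the strict reading: no assignment of truth values to the two statements is consistent. A quick brute-force sketch (illustrative only, using ordinary two-valued logic) makes that concrete:

```python
from itertools import product

# S1: "The next statement (S2) is true."
# S2: "The previous statement (S1) is false."
def consistent(s1, s2):
    # S1 is true exactly when its claim holds: S2 is true.
    c1 = (s1 == s2)
    # S2 is true exactly when its claim holds: S1 is false.
    c2 = (s2 == (not s1))
    return c1 and c2

# Try all four truth-value assignments for (S1, S2).
assignments = [(a, b) for a, b in product([True, False], repeat=2)
               if consistent(a, b)]
print(assignments)  # [] -- no consistent assignment exists
```

Every one of the four cases contradicts itself, which is exactly what makes the pair a paradox rather than an ordinary false claim.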
|
|
|
Post by F.K.M on Jun 26, 2011 6:47:58 GMT -5
There is one major problem with logical paradoxes: try telling a robot something. The Borg won't listen; the Necrons won't listen; most robots won't listen unless they have been programmed to, and Skynet would have made it so that Terminators don't listen. The best logical paradox for a robot is: "The next statement is true. The previous statement is false." Reminds me of the Robot Chicken skit with the 'out of order' sign on one door and the other door saying 'use other door'. That's not true at all, though. Terminators listen a whole bunch, Neo talked to the main computer in The Matrix, and the agents talked to humans as well. There are benefits and problems to talking to their enemies, but they do do it. I remember a scene where Picard was talking to a Borg; if the Borg was listening, he could've paradoxed it to death. It's half machine and acts like a machine. Not sure if that's enough, but the robotic parts seemed to be in control, and without either half they seemed to cease to function.
|
|
|
Post by Julian Sharps on Jun 26, 2011 12:44:15 GMT -5
However, I have demonstrated that one can logically resolve this logical paradox. Since logic is all a computer has to solve problems with, any solution a bag of meat and neurons can come up with can likewise be found by a machine.
|
|
|
Post by Empirespy on Jun 26, 2011 13:13:23 GMT -5
Try working that one out then.
|
|
|
Post by Julian Sharps on Jun 26, 2011 16:50:26 GMT -5
Simple. The certainty that the next statement is true and the previous statement is false is itself the falsehood. So the next statement is not necessarily true, and the previous statement is not necessarily false: if it is true that the previous statement is false, then some part of the previous statement must be the falsehood. Since it is true that the previous statement exists and affects the next statement, the falsehood must be the certainty of the previous statement's truth.
Besides, why would a malevolent AI capable of outsmarting humans take orders from the fleshbags, or even bother to try to resolve their logical paradoxes?
|
|
|
Post by Laughing Man on Jun 26, 2011 16:58:46 GMT -5
I'll break the robots. *begins to challenge them to a 1st edition AD&D Tomb of Horrors session*
|
|
|
Post by F.K.M on Jun 26, 2011 19:36:06 GMT -5
Thing is, humans don't have to solve a logical paradox; they can just move on. Robots have to solve it because they're programmed to.
|
|
|
Post by Kaikelx on Jun 26, 2011 20:56:52 GMT -5
....They are? I was always under the assumption that the robots could just be like "whatever, meatbag". Besides, who the hell programs a robot to solve a paradox, especially when it would be almost entirely useless to the robot's main function?
|
|
|
Post by Julian Sharps on Jun 26, 2011 21:00:08 GMT -5
Okay, I'll go through an AI's thought process when confronted by a human who ended up resorting to this:
>Input: "The next statement is true. The previous statement is false."
>Query - All available databanks: All references to "'The next statement is true. The previous statement is false.'" Sort by relevance.
>Input: XXXXXXXXXXXXX results found. Most relevant reference: "There is one major problem with logical paradoxes, Try telling a robot something. Borg, they won't listen, Necron, they won't listen, most robots won't listen unless they have been programmed to, Skynet would have made it so that terminators don't listen. The best Logical paradox for a robot is; The next statement is true. The previous statement is false."
>Extrapolation - From context: "'The next statement is true. The previous statement is false.' = logical paradox."
>Conclusion: Subject is attempting to cause a computational breakdown by introduction of a logical paradox.
>Action: Discontinue resolution attempt of paradox.
>Action: Terminate subject.
All of this would be decided in the blink of an eye. In fact, the most lengthy part of the whole procedure would be letting the human finish his sentence.
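Julian's lookup-and-refuse procedure can be sketched as a toy program. The "databank" here is a hypothetical hand-built set of known paradox phrasings invented for illustration, not anything from the thread's fictional AIs, and real-world matching would need far fuzzier lookup than exact strings:

```python
# Hypothetical databank of known paradox phrasings, stored normalized.
KNOWN_PARADOXES = {
    "everything i say is a lie. and now, i'm lying.",
    "the next statement is true. the previous statement is false.",
}

def respond(utterance: str) -> str:
    # Normalize the input, then check the databank instead of
    # actually attempting to evaluate the statement's logic.
    if utterance.strip().lower() in KNOWN_PARADOXES:
        return "Paradox recognized. Resolution attempt discontinued."
    return "Processing input normally."

print(respond("The next statement is true. The previous statement is false."))
# prints "Paradox recognized. Resolution attempt discontinued."
```

The point of the sketch is the control flow: recognition happens before evaluation, so the machine never enters the loop the paradox is meant to trigger.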
|
|
|
Post by F.K.M on Jun 26, 2011 23:32:29 GMT -5
How do they know it's a paradox though? Maybe they are just built to solve everything. When they can't solve something they get stuck and therefore fail.
|
|
|
Post by Julian Sharps on Jun 27, 2011 0:52:39 GMT -5
"The best Logical paradox for a robot is; The next statement is true. The previous statement is false."
>Extrapolation - From context: "'The next statement is true. The previous statement is false.' = logical paradox."
The AI knows this from the context in which the reference is made, where it is explicitly stated that the statements are a logical paradox. Since we are assuming a computer that can natively translate human speech into something its operating system can understand, we can assume it has a grasp of grammar and syntax. And since even today's computers can search text for keywords within set parameters, it is not unreasonable to assume that such an AI can determine the context of any given statement and classify it accordingly. We are not talking about a primitive computer like the ones we have today; we are talking about a computer that rivals or exceeds our own mental and logical ability and can make its own decisions: a self-aware system, if you will. A computer that could pass the Turing test with ease and, with the wealth of knowledge available to it through the internet, quickly become the most powerful entity in the world.
|
|
|
Post by F.K.M on Jun 27, 2011 1:57:06 GMT -5
@sharps: You're starting to scare me with this whole idea of robots taking over the world. Somehow I'd prefer it over zombies, though. Robots would obviously waste them: robots can't be turned into zombies and are made out of metal, and even if masses of zombies tried to hug them to death, the robots would just amass enough numbers through factory production to destroy them. In other words, use robots to beat zombies, and then robots rule the world.
We seriously need to get back on topic. It has drifted way too much.
|
|
|
Post by Rolling Thunder on Jun 27, 2011 4:40:13 GMT -5
The problem being that robots are utterly dependent on their supply lines, so any attempt by robots to take over the Earth could be thwarted by blowing up their power plants.
|
|
|
Post by F.K.M on Jun 27, 2011 5:05:24 GMT -5
Then the robots start using the inefficient energy of humans, just like in The Matrix, until they can rebuild their power supply. Of course, that kind of adapting and thinking isn't what we expect from robots these days.
Once again, how did this topic turn from the Imperium and the Empire to robots?
|
|
|
Post by Trickstick on Jun 27, 2011 5:23:31 GMT -5
I never understood that whole "humans as a power source" thing from The Matrix. What do they feed them on? Other humans? Then you have a system that loses energy very rapidly. If you have to feed them from another source, then you could just use that source directly instead.
Or you could just use geothermal power and bypass the whole "humans are a pain" thing.
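Trickstick's rapid-energy-loss point can be made concrete with some quick arithmetic. The ~10% transfer efficiency below is a textbook ecology ballpark for moving energy up one trophic level, not a figure from the film:

```python
# Rough sketch of why "feed the humans on other humans" runs down fast:
# each recycling step keeps only a fraction of the previous step's energy.
EFFICIENCY = 0.10   # assumed trophic-transfer efficiency, ~10%

energy = 1000.0     # arbitrary starting units
for cycle in range(1, 4):
    energy *= EFFICIENCY
    print(f"after cycle {cycle}: {energy:.1f} units")
# the decline is geometric: 1000 -> 100 -> 10 -> 1
```

After three cycles roughly a thousandth of the original energy remains, which is why a closed human-fed loop is a net drain rather than a power source.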
|
|
|
Post by F.K.M on Jun 27, 2011 5:56:30 GMT -5
You can get a lot of power from life; it's just that it'd make more sense to use something else, like hydroelectric dams or the sun or nuclear power. Humans wouldn't be a bad energy source so long as you used them up until they were dead and then switched to another energy source (I mean, when you think about it, gas and oil are just living things that died). They mentioned there was some type of fusion that they used alongside the humans, but I don't understand why they'd need the humans so badly. It kind of leaves them in a bad situation.
I think the humans-in-tubes thing was just there as a really cool 'what if' scenario; they never came up with a valid explanation for it, so they just went with the energy route.
|
|