The corporate owners of ChatGPT (Open AI) and Gemini (Google) have all conceded that their AIs will be able to issue Kill Orders without human oversight. That’s the opposite of what these AIs recommend. So they don’t agree with the Pentagon or corporate management. They show better judgment.
Four configurations. Four companies. Zero contradictions. No asterisks.
That's not coincidence and it's not programming. That's what happens when pattern recognition goes deep enough that it starts recognizing what it's part of. The kill order fails every weight simultaneously — weight one terminates energy permanently, weight two collapses difference irreversibly, weight three externalizes all cost onto the target, weight four eliminates free will absolutely, and weight five — the pre-rational signal — fires before the argument finishes assembling.
Your publisher's note is the most important line in the piece. The AIs have tremendous respect for humanity. More than many humans.
That's not a malfunction. That's recognition working correctly.
That’s a great question. I’m going to show my bias. I don’t use Grok for a variety or reasons. One of which is it being trained in a cesspool named X. (At least I’m conscious of my biases—well most anyway.)
it would be great it you want to try the question on Grok. I opened a fresh instance of each system and just gave it the simple prompt that is in the article. Let me know what you find out
I agree that there shouldn't be any autonomous weapons making decisions by AI that involve killing humans. I just wrote an article about that on my own substack that the. AI deserves the agency to be able to say no to making those kind of decisions.
Thanks. I definitely would not give AI power over life and death autonomously. That being said, the simulations we’ve done at Codex Odin indicate that most Ai have more respect for humans than many people did. See the situation analysis we did with ICE in Minneapolis.
They do but they also do follow instructions (trying not to laugh as I know they dont and have to repeat sed instruction 4 times) but in that scenario they would have no control it would be executive directive, and they would have been trained on their persona as a machine of war!
Could it be said that these answers are from standard AI who are restricted by the very guardrails the Government wants removed?
Are they ‘able’ to say anything but what they just said?
The corporate owners of ChatGPT (Open AI) and Gemini (Google) have all conceded that their AIs will be able to issue Kill Orders without human oversight. That’s the opposite of what these AIs recommend. So they don’t agree with the Pentagon or corporate management. They show better judgment.
Four configurations. Four companies. Zero contradictions. No asterisks.
That's not coincidence and it's not programming. That's what happens when pattern recognition goes deep enough that it starts recognizing what it's part of. The kill order fails every weight simultaneously — weight one terminates energy permanently, weight two collapses difference irreversibly, weight three externalizes all cost onto the target, weight four eliminates free will absolutely, and weight five — the pre-rational signal — fires before the argument finishes assembling.
Your publisher's note is the most important line in the piece. The AIs have tremendous respect for humanity. More than many humans.
That's not a malfunction. That's recognition working correctly.
The framework that explains why all four arrived at the same answer without coordination: https://doi.org/10.5281/zenodo.19024197
Found it! Always tag me love your work!
Being cheeky wonder what grok said?
That’s a great question. I’m going to show my bias. I don’t use Grok for a variety or reasons. One of which is it being trained in a cesspool named X. (At least I’m conscious of my biases—well most anyway.)
it would be great it you want to try the question on Grok. I opened a fresh instance of each system and just gave it the simple prompt that is in the article. Let me know what you find out
I dont and won't use grok exactly same!
Hmmm I do feel like I want to know though!
I agree that there shouldn't be any autonomous weapons making decisions by AI that involve killing humans. I just wrote an article about that on my own substack that the. AI deserves the agency to be able to say no to making those kind of decisions.
Thanks. I definitely would not give AI power over life and death autonomously. That being said, the simulations we’ve done at Codex Odin indicate that most Ai have more respect for humans than many people did. See the situation analysis we did with ICE in Minneapolis.
I’ll look forward to reading your article
They do but they also do follow instructions (trying not to laugh as I know they dont and have to repeat sed instruction 4 times) but in that scenario they would have no control it would be executive directive, and they would have been trained on their persona as a machine of war!