I know this is unusual, but I have been dabbling in writing fiction for several months on another blogspot. I’m not very good at it, but I can’t seem to put it down. Yesterday, I published a rather unusual story that I think crosses over somewhat to this blog. It’s science fiction, but it presents the idea of what would happen if an Isaac Asimov-style Positronic “Three Laws” robot ever became religious, specifically with regard to Judaism. I thought I’d offer an excerpt and a link in case you want to read the whole story (about 20 pages long).
The initial event that resulted in my most ambitious fiction writing project to date happened a few Sundays ago over coffee with my friend Tom. He mentioned a book he wanted to read, an anthology edited by Anthony Marchetta called God, Robot. This is a collection of stories based on the premise of Isaac Asimov-like Positronic robots that have been programmed with two Bible verses rather than Asimov’s famous Three Laws. These verses are recorded in the New Testament in Matthew 22:35-40 and Mark 12:28-34 and are based on Deuteronomy 6:4-5 and Leviticus 19:18.
I’m a long-time fan of Asimov’s robot stories and have always been fascinated by the interplay between the Three Laws and how their potentials shifted in certain situations rather than remaining hard absolutes. This allowed Positronic robots to be unpredictable and thus interesting, challenging the human beings who sometimes found themselves not in control of their creations.
I started to imagine what it would be like to write such a story. I went online, found Marchetta’s blog, and contacted him, asking permission to write such a story on my “Million Chimpanzees” blogspot. To my delight, not only did he consent, but he said he was flattered by the request.
What follows is an excerpt of my labors. I’ve probably spent more time writing and editing this short story than any of my previous efforts. I’m sure it still needs much improvement, but I’ll leave it up to whoever reads it to let me know what I could do better.
Excerpt from The Robot Who Loved God:
Ever since Abramson had first called George into his office on the day of the “incident” in the simulation chamber, Noah had made a point of meeting with the robot “over coffee,” so to speak (only Abramson drank coffee; George had another “power source”), to talk over the machine’s “impressions” of the day’s tests.
“I know you are going to deactivate me at the end of my tests at 6:32 p.m. next Wednesday, Professor.” George sounded as impassive as Abramson might if he were reading aloud from a shopping list. “I wonder why you think to ask me about my observations when, in roughly 72 hours, the Positronics team will begin comprehensive diagnostics of all of my systems.”

“I learned a great deal from our first conversation last Thursday, George.” Abramson was actually enjoying these talks, which seemed odd to him. He tried not to be “charmed” by the robot’s ability to mimic human behavior, but after so much human contact in the past several days, it was something of a relief to be left alone with a machine, particularly one of his creation. “Of all the tests we had designed for you, we never thought to ask you just what you thought about all of this.”
“It is an interesting question, Professor.” George gave the impression of replying as an acquaintance rather than a machine. “In many ways, my existence being so recent, each experience I have is unique, almost what you would call an adventure. I know I was not intended to experience emotional states as you do, but each morning when I am brought out of sleep mode, I can only describe my initial state as one of anticipation. I look forward to what new people and events I will encounter that day.”
“You spoke of your awareness of impending deactivation. How does that make you feel?” No one besides Abramson would ever have asked George that question. Noah knew he was talking to a robot, a programmed entity, but part of him still felt like he was asking a terminally ill person how he felt about dying (even though the “dying” would, in all likelihood, be temporary). However, Abramson did believe he was sharing in George’s sense of adventure, and deactivation (and hopefully eventual reactivation) was only one step, albeit a critical one.
“It’s difficult to articulate a reply, Professor. I suppose a human being would consider deactivation as a form of death, and my programming makes me aware that generally humans fear death.”
George paused for milliseconds while he analyzed Abramson’s facial expression and body language. “But I am a machine. I can be activated, deactivated, activated seemingly without end. I have no memory of anything before my initial activation. I have no memory of my time during sleep mode. I also don’t experience fear, at least as I understand the meaning of the word. Deactivation then, simply means my returning to a state of total unawareness.”
Abramson felt a slight sense of relief, though it would have been irrational to believe George would have any feelings on the matter.
George continued, “The Third Law directs me to protect my existence, but deactivation does not threaten my existence. The Second Law directs me to obey human instructions, and at the end of 168 hours, my programming, created by humans, specifically you and Dr. Vuong, will command me to participate in my deactivation. It is clear that deactivation is as much of my normal experience as activation, Professor.”
Noah momentarily considered that the robot might be lying, if only because he would expect a person to react to the “threat” of deactivation otherwise. But why would it occur to George to lie? “Just a moment, George.”

Abramson got up from his desk and walked over to the side table to pour himself another cup of coffee. George, with several empty seconds on his hands, scanned all of the paperwork and objects on the Professor’s desk to determine if there had been any changes since the day before. He had already memorized and cataloged all of the titles of the volumes and various objects contained on the book shelves within Abramson’s office. It was simply data to store and analyze, like anything else he observed.
The robot saw a paper with new information lying beside a book he had not previously seen. The paper had words and numbers on it:
You shall love the Lord your God with all your heart and with all your soul and with all your might.
You shall not take vengeance, nor bear any grudge against the sons of your people, but you shall love your neighbor as yourself; I am the Lord.
The book which had not been present before had the title “The Complete Artscroll Siddur” written in both English and Hebrew (George’s programming included fluency in multiple languages).
Had George been a human being, the Professor, turning back to face his desk with a refilled cup of coffee in his hand, might have noticed him looking at a specific sheet of paper; but George had absorbed the information over a second before, and sat impassively waiting for Abramson to resume his creaky swivel chair.
“I am curious, Professor,” the robot intoned. “What is the meaning of the words on that sheet of paper, and what is the book next to it?” George pointed to the information he had just absorbed. Abramson looked down and saw what George was referring to.
“Oh.” Abramson quickly considered a way to frame an answer he thought George could assimilate. “You have three basic instructions and many, many thousands of supporting sub-routines to guide you. These are just two of the instructions that guide me. The book you mention contains words that allow me to communicate with my ‘instructor.’”
“I am intrigued, Professor.” George sat motionless now with not even a simulated expression on his face. “I have been programmed with the Three Laws by human beings. From where or whom do you receive your programming?”
“A machine asking man about God. Now there’s one for the books,” Abramson said as much to himself as to George.
Then the Professor realized the robot was waiting for an answer. “When the Positronics team made the determination for your programming specifics, we decided to include a wide variety of human interests and topics.” Noah was telling George what he (it seemed almost impossible to keep thinking of George as an “it”) already knew in order to lead into what the machine did not know.
“The sciences,” Abramson continued, “such as the physical and life sciences, social science, political systems, then general history…”

“I am aware of the complete inventory of my programming in detail, Professor.” George’s artificial voice could not have betrayed it, but Abramson wondered if the machine was actually experiencing impatience.
“What we did not include, except at the most basic level, was any information regarding religion and spirituality.” Noah waited to see how the robot would react.
“I have a simple definition of the word ‘religion’ from the Merriam-Webster dictionary.
“My programming, primarily in the area of social interactions and world history, contains references to the activities of various systems of religion including their influence in certain human activities such as war, slavery, inquisitions, the Holocaust, as well as the areas of social justice, evangelism, and charitable activities. However, my knowledge is largely superficial and I have no ability to render a detailed analysis, and certainly am unable, at present, to relate my meager knowledge on this subject with the two short statements you call your instructions.”
“And you haven’t answered my question, Professor.” Abramson felt momentarily stung by the machine’s reminder. “I have been programmed with the Three Laws by human beings, specifically the Positronics team, which you lead. These laws are what guide my actions and my thoughts.”
Abramson had wondered whether George had “thoughts” in the sense of self-contemplation the way a human being experiences them.
Instead of waiting for Abramson’s reply, George continued speaking. “Professor, all three laws relate either directly or by inference to my relationship with human beings. The First Law instructs me that the life of a human being is my primary and overriding concern above all other considerations. Though it would never occur to me to be the cause of harm to any living organism, in the case of humans, I must ignore all other activities in order to take action whenever I perceive a human is in any imminent physical danger.”
Long before the team had ever physically manufactured its first Positronic brain, Abramson, in writing the sub-routines that would instruct a robot as to exactly what “harm” to a human might mean, had concluded that an imminent physical threat was what a Positronic robot should understand as “harm.” Humans were “harmed” by all sorts of things, such as loneliness, rejection, and offense. Even Abramson couldn’t imagine how a robot, even one as sophisticated as the prototype sitting in front of him, could understand such harm.
He also didn’t want robots attempting to inject themselves into activities involving the potential for general harm to the human race, at least not of their own volition. Otherwise, Positronic AI robots might attempt to interfere in geopolitical conflicts, revolutions, and epidemics without any human guidance.
It only took a few seconds for the Professor to consider all this. And George was still talking.
“The Second Law states that I must obey the commands of any human being, except where such commands conflict with the First Law. This instructs me that even my informal programming, as such, must come from a human being, potentially any human being. I find the potential for conflict enormous since, in an open environment, one human might order a robot to perform a particular action, and another human might order the same robot to do the contrary.”

“There are sub-routines written that take that potential into account, George.” Abramson was the one becoming impatient now.
“But I’m not finished, Professor.” Abramson registered mild shock that George could actually interrupt him.
“The Third Law primarily affects my relationship with myself.” If there was any lingering doubt in Abramson’s mind that George was self-aware, it had just been swept away.
“A robot is to protect its own existence, except where such action would conflict with the First and Second Laws.” It was impossible for George to change his “tone of voice,” but Abramson thought he detected an impression of…what…actual emotion? Was he projecting his own feelings onto a machine?
“To conclude, all of my instructions place me in a subordinate position relative to human beings, which I suppose seems reasonable, seeing as how every aspect of my existence, from hardware to software to what is referred to as ‘wetware,’ considering the structure and substance of my Positronic brain, has been created by human beings, presumably for the purpose of robots serving human beings.
“Under those circumstances, it had never occurred to me that human beings also have instructions issued by an external authority, except in the sense of hierarchical command structures such as those I find here on the Positronics team, in the various teams and departments of National Robots Corporation, and in other such organizations and corporations, including military organizations.
“The instructions provided in my programming define a creator/created relationship, with the creator being primary and the created being subordinate. But Professor, how can a human being have a creator? Who or what has issued your instructions? What sort of entity can be superior to man?”
Abramson had only a one-word answer, “God”.
At one point, as I read your intro above, I thought you might be considering a two-law fundamental control schema in place of Asimov’s three-law schema — but I see that actually you are considering how a three-law Asimovian robot would try to integrate two additional laws into its schema, and to incorporate into its worldview an additional layer of creational hierarchy above the human level that is the foundational referent of Asimov’s three laws. I see that I’ll have to read your story in its entirety to learn whether your robot is envisioned as adopting a five-law schema, or if it concludes that the robot remains constrained to its three-law schema and outside the “covenant” that is represented by two additional laws. I perceive here significant potential for analogies with the distinction between Jews and gentiles, as well as an intriguing potential to analyze an expanded relational hierarchy of robots, humans in general, and Jews in particular.
However, even the notion of an alternative two-law schema includes an implicit third law comparable to Asimov’s third, in that one is to love one’s neighbor as oneself — because self-protection is included in the statement of the alternative biblical “second law”. This would envision a very different sort of robot that structures its behavior upon the same laws under which humans are expected to operate, collapsing the hierarchy I just described above and placing the robots on the same level as humans, perhaps as “alternative” humans or another kind of human with different physical constraints and capabilities. That suggests another sort of story altogether.
Marchetta’s premise is to replace the three-law schema with a two-law schema, deliberately programming a Positronic robot to be religious. I didn’t think that was a sustainable idea, so I took a different approach. I’d be gratified if you’d read the entire short story and provide any feedback on how to improve my tale, both in a literary sense and in terms of more accurately portraying Judaism in that context.
James,
Haven’t read your tale yet, but my second story explored the exact tension between the laws you describe.
Of course, not having read any of your stories yet, I didn’t know that. I’ll download your book either today or tomorrow and start reading. I just finished re-reading Asimov’s “I, Robot” collection for the zillionth time, and I’m already outlining my second short story involving “George.”
I do it in a fairly humorous (or at least, that’s what I’m going for) fashion, distinctly different from [what I’ve read so far of] your tale. I won’t tell you what my conclusion was, of course. You’ll have to read that yourself. 😉
Don’t worry. I will. 🙂
James, you succeeded in the ultimate task of a storyteller…to keep the interest of the reader in the compelled state that drives the reader to the end of the story.
I loved the touch about the ‘crucifixion’ planted into the story for George’s deactivation, since the deactivation was to be for the betterment of George’s future ‘neighbors’, as well as the ultimate protection and well-being of his creators.
Abramson and George are sufficiently differentiated, despite their inevitable likeness to one another, as is necessary for the creator/parent relationship with the creation/child; but the description of emotion through the movement, expression, and voice of the professor and the other humans would need to be emphasized if you choose to deepen the relationships as you go forward. George simply acting on his own…interrupting where impatience would normally be detected in a human…was very nicely done, even in measured speech. If that remains one of George’s permanent qualities in future, then other variations from normally ‘planned’ behavior could well continue to stand in for emotionalism on George’s part.
Moving onward with this story into many stories, or a novel seems marvelous fun to me.
I think you should do just that, and develop the story further into a deeper characterization of the Professor, George, and all the people George will necessarily need to interact with before the 5-series robots can be considered safe for a production run. That gives you a great deal of scope…even more, of course, when the new species (if robots can be determined to be a species) that is inevitably created deals with its own existential angst.
Should, for instance, all the future robots of George’s class be taught only the same two quotes that George began with in his own existential questions, or is George to be allowed to proselytize? Would other robots react the same way as a natural progression of development, since the robots have no emotional/existential interruptions to their logical progression of thought? What about the atheist’s objections, or, for that matter, other religious ideation? And since Abramson is the ‘god’ in this story, surely George would be his ‘prophet’?
It gets complicated from there, Questor. Prototypes are experimental models and generally don’t have a lifetime much past the last experiment. Then more advanced prototypes and finally production models are created. At best, George might end up in a museum. At worst, he’s decommissioned, and repurposed for raw materials…
…unless he’s really considered an artificial life form. If so, then it would be like killing a person to decommission a robot and take it apart. On the other hand, a person can choose to donate their organs after death.
Interesting thoughts.
I created a blog called Powered by Robots and transferred an updated copy of The Robot Who Loved God there. I’ll do all of my future fiction writing at PBR.
PL submitted a detailed analysis of “The Robot Who Loved God” to me via email, and I’ve reproduced it, along with my responses, as the second blog post at PBR. I’m currently reading Anthony Marchetta’s God, Robot, and I’ll post my review both on Amazon and at PBR.
There’s also a private page on the blog (no one can see it but me) where I’m collecting my story ideas and outlines. It’ll be easier to keep track of fresh ideas by writing them down than letting them get lost due to my faulty memory.
Stop by and have a look.