I’m having some frustrating connection problems today. I can get to Google sporadically, but I can’t open search results, nor can I get to Amazon. I’ve tried both a Windows and a Mac computer and multiple web browsers, but it doesn’t make much difference. I’ve rebooted my modem a few times and it seems to help temporarily, so I don’t know if it’s my connection or if there’s some sort of horrendous DDoS event attacking part of the internet.
The reason this is particularly frustrating just now is that in one of my Gmail accounts (when I can get to it), I found a Bookbub notice for an eBook called A Time to Every Purpose by Ian Andrew. The Google books blurb says about the book:
After eighty years of brutal Nazi domination millions have been persecuted and killed in a never-ending holocaust. But this oppressive and violent world still retains a few heroes. Now Leigh, the preeminent scientist of her generation, is pitched into the final battle. One that ranges from London to Berlin to Jerusalem. But will she destroy what she loves to save what she can only imagine? After one more murder and one chance remark, now is the time to reset history. The new novel by Ian Andrew.
However, the Bookbub description is more interesting:
Visit an alternate timeline where Jesus was never crucified, leading to 2,000 years of peace — and a society totally unequipped to contend with the rise of Nazism. Will inventor Leigh Wilson destroy everything she knows to reset history?
I’m tempted to buy the book (although since I cannot currently reach Amazon, I don’t know how) just to see how the author pulled off not crucifying Rav Yeshua and yet had him fulfill his role of Messiah in the first century CE (which is what would presumably have to happen for there to be 2,000 years of peace).
On my sister blog Powered by Robots, I’m quite tempted to write a short story describing the start of this alternate history, but knowing what I know theologically, I can’t imagine the circumstances in which Rav Yeshua would have deliberately avoided the crucifixion and begun his reign as King Messiah at that point in history.
It would mean rewriting certain very significant portions of the Bible. Not just in the Apostolic Scriptures, but in the portions of the Tanakh that point to Moshiach.
However, it is a compelling concept. I wonder how best to approach it?
Sarah stood across the street from her Bubbe’s and Zayde’s house. The evening of December 24th, the first night of Chanukah this year, was cool, even in the Los Angeles suburb of Brentwood, but she had dressed for the occasion. She made sure the coat she was wearing wouldn’t attract attention in case anyone saw her.
Sarah wished she could get closer. She wished she could just knock on the door and go inside, but she wasn’t supposed to be there and she wasn’t supposed to change anything.
Wait! There they were. She could see them through the window in the front of their house. Bubbe and Zayde. Her big brother Aaron, all of seven years old, was excitedly jumping up and down next to them. Sarah couldn’t hear anything of course, but she could see everyone’s facial expressions and imagined Zayde firmly but kindly helping Aaron to calm down.
Tradition says that the Chanukah menorah must be placed either in a central area of the home or by a window. The latter is to proudly announce that a miracle had occurred and this was the commemoration of that miracle. Sarah was watching her family tonight thanks to a miracle she had created herself.
This tale is more flash fiction than a science fiction short story so you can read all of A Time to Follow Your Heart in just a few minutes. Let me know what you think.
This is the sequel to my previous science fiction short story The Robot Who Loved God. It’s about the same length as my previous tale and hopefully will successfully expand upon the concepts I introduced in the first story. In “The Robot Who Loved God,” the prototype robot George was deactivated at the end of a week’s worth of tests. During that week, George was accidentally introduced to the concept of God, particularly the God of Israel, the God of his creator, Professor Noah Abramson.
Although George was considered a failed experiment by National Robotics Corporation CEO Richard Underwood who did not allow Abramson to reactivate him, a critical problem has been discovered that cannot be solved by human beings. Will George be able to find a solution to the problem of how to re-create a working Positronic brain when the finest human scientific minds cannot, and how will George’s apprehension of God affect the project?
Here’s a brief excerpt from the short story. I hope you’ll enjoy it enough to click the link at the bottom and read the whole thing.
Margie Vuong, as usual, was the first member of the Positronics team to enter the lab, today just after 4 a.m. She found George in his alcove in sleep mode, which she didn’t expect. Abramson had permitted the robot to forego “sleep” in order to work on the mystery of the non-reproducible Positronic brain, so she thought she’d find him still at it.
Most people thought Vuong was an insomniac, but ever since she was an undergrad, she had found she needed relatively little sleep, and she enjoyed the quiet of the early morning hours when almost everyone else was still in bed. It left her alone with her thoughts, which was usually the company she most enjoyed.
However, last night, even when Margie wanted to sleep, she couldn’t. So she stayed awake and caught up on personal emails, read some recently published technical articles, and for several hours binge-watched the reboot of Firefly…entertaining, but not as good as the original.
This morning, Vuong regretted never having developed a taste for any caffeinated beverages. Her ex-husband had tried to get her interested in his hobby of drinking coffee from beans he had roasted himself, but she didn’t find the smell or taste palatable.
Vuong had logged into her terminal and was checking emails when George spoke: “Good morning, Dr. Vuong. I hope you slept well.” The robot could monitor her vitals better than a Fitbit and knew damn well she barely slept at all.
Resisting the urge to snap back at the machine with some snarky remark, Vuong instead replied, “Good morning, George.”
“Dr. Vuong, I would like to ask a favor of you.” What favor could she possibly do for a robot and was it something she was willing to do?
“Since Professor Abramson has asked that there be no digital footprint of our investigation, I cannot send out a group-wide email or text informing the team of the conclusion to my investigation. When the team arrives, can you arrange for a meeting in the conference room with all senior members?” Each team lead had a small staff of technicians at their disposal, and it was clear George didn’t consider their presence necessary to hear his announcement.
“Wait! What?” Had George actually solved the problem? Did he know why she and the Professor couldn’t create another working Positronic brain?
“I believe 9 a.m. should be an appropriate time for such a meeting, since Dr. Miller, the most tardy member of the group, typically arrives no later than 8:30.”
“Uh, sure George. Um…you really solved the problem of duplicating a Positronic brain?”
“I would prefer to announce my findings to the whole team, Dr. Vuong.”
“Care to give me a hint?” The one night when she let Abramson convince her to go home rather than stay late at the lab was the night when George found out where she and Noah had gone wrong. She wanted to hate George for that, but she wanted the answer even more.
“I don’t believe I know how to ‘hint,’ Dr. Vuong.”
In a moment of resentment, Margie counted all of the different ways she could insert an invasive program into a Positronic matrix. No, this wasn’t George being deliberately obstructive. The robot was just being transparent with the team as he was instructed to do. No withholding information from some team members and only revealing it to others.
It didn’t occur to Vuong that George was withholding a great deal of information from the team. It just had nothing to do with Positronic brains.
I know this is unusual but I have been dabbling in writing fiction for several months on another blogspot. I’m not very good at it, but I can’t seem to put it down. Yesterday, I published a rather unusual story that I think crosses over somewhat to this blog. It’s science fiction, but presents the idea of what would happen if an Isaac Asimov, Positronic “Three Laws” robot ever became religious, specifically regarding Judaism. I thought I’d offer an excerpt and a link in case you want to read the whole story (about 20 pages long).
The initial event that resulted in my most ambitious fiction writing project to date happened a few Sundays ago over coffee with my friend Tom. He mentioned a book he wanted to read, an anthology edited by Anthony Marchetta called God, Robot. This is a collection of stories based on the premise of Isaac Asimov-like Positronic robots that have been programmed with two Bible verses rather than Asimov’s famous Three Laws. These verses are recorded in the New Testament in Matthew 22:35-40 and Mark 12:28-34 and are based on Deuteronomy 6:4-5 and Leviticus 19:18.
I’m a long time fan of Asimov’s robots stories and have always been fascinated by the interplay between the Three Laws and how their potentials shifted due to certain situations, rather than remaining hard absolutes. This allowed Positronic robots to be unpredictable and thus interesting, challenging the human beings who sometimes found themselves not in control of their creations.
I started to imagine what it would be like to write such a story. I went online, found Marchetta’s blog, and contacted him, asking permission to write such a story on my “Million Chimpanzees” blogspot. To my delight, not only did he consent, but he said he was flattered at the request.
What follows is an excerpt of my labors. I’ve probably spent more time writing and editing this short story than any of my previous efforts. I’m sure it still needs much improvement, but I’ll leave it up to whoever reads it to let me know what I could do better.
Ever since the first time Abramson had called George into his office the day of the “incident” in the simulation chamber, Noah had decided to meet with the robot “over coffee” so to speak (only Abramson drank coffee; George had another “power source”) to talk over the machine’s “impressions” of the day’s tests.
“I know you are going to deactivate me at the end of my tests at 6:32 p.m. next Wednesday, Professor.” George sounded as impassive as Abramson might if he were reading aloud from a shopping list. “I wonder why you think to ask me about my observations when, in roughly 72 hours, the Positronics team will begin comprehensive diagnostics of all of my systems.”
“I learned a great deal from our first conversation last Thursday, George.” Abramson was actually enjoying these talks, which seemed odd to him. He tried not to be “charmed” by the robot’s ability to mimic human behavior, but after so much human contact in the past several days, it was something of a relief to be left alone with a machine, particularly one of his creation. “Of all the tests we had designed for you, we never thought to ask you just what you thought about all of this.”
“It is an interesting question, Professor.” George gave the impression of replying as an acquaintance rather than a machine. “In many ways, my existence being so recent, each experience I have is unique, almost what you would call an adventure. I know I was not intended to experience emotional states as you do, but each morning when I am brought out of sleep mode, I can only describe my initial state as one of anticipation. I look forward to what new people and events I will encounter that day.”
“You spoke of your awareness of impending deactivation. How does that make you feel?” Anyone besides Abramson would never have asked George that question. Noah knew he was talking to a robot, a programmed entity, but part of him still felt like he was asking a terminally ill person how he felt about dying (even though the “dying” would, in all likelihood, be temporary). However, Abramson did believe he was sharing in George’s sense of adventure, and deactivation (and hopefully eventual reactivation) was only one step, albeit a critical one.
“It’s difficult to articulate a reply, Professor. I suppose a human being would consider deactivation as a form of death, and my programming makes me aware that generally humans fear death.”
George paused for milliseconds while he analyzed Abramson’s facial expression and body language. “But I am a machine. I can be activated, deactivated, activated seemingly without end. I have no memory of anything before my initial activation. I have no memory of my time during sleep mode. I also don’t experience fear, at least as I understand the meaning of the word. Deactivation then, simply means my returning to a state of total unawareness.”
Abramson felt a slight sense of relief, though it would have been irrational to believe George would have any feelings on the matter.
George continued, “The Third Law directs me to protect my existence, but deactivation does not threaten my existence. The Second Law directs me to obey human instructions, and at the end of 168 hours, my programming, created by humans, specifically you and Dr. Vuong, will command me to participate in my deactivation. It is clear that deactivation is as much of my normal experience as activation, Professor.”
Noah momentarily considered that the robot might be lying, if only because he would expect a person to react to the “threat” of deactivation otherwise. But why would it occur to George to lie? “Just a moment, George.”
Abramson got up from his desk and walked over to the side table to pour himself another cup of coffee. George, with several empty seconds on his hands, scanned all of the paperwork and objects on the Professor’s desk to determine if there had been any changes since the day before. He had already memorized and cataloged all of the titles of the volumes and various objects contained on the book shelves within Abramson’s office. It was simply data to store and analyze, like anything else he observed.
The robot saw a paper with new information lying beside a book he had not previously seen. The paper had words and numbers on it:
You shall love the Lord your God with all your heart and with all your soul and with all your might.
You shall not take vengeance, nor bear any grudge against the sons of your people, but you shall love your neighbor as yourself; I am the Lord.
The book which had not been present before had the title “The Complete Artscroll Siddur” written in both English and Hebrew (George’s programming included fluency in multiple languages).
If George were a human being, the Professor, turning back toward his desk with a refilled cup of coffee in his hand, might have caught the robot looking at a specific sheet of paper in front of him; but George had absorbed the information over a second before, and sat impassively waiting for Abramson to return to his creaky swivel chair.
“I am curious, Professor,” the robot intoned. “What is the meaning of the words on that sheet of paper, and what is the book next to it?” George pointed to the information he had just absorbed. Abramson looked down and saw what George was referring to.
“Oh.” Abramson quickly considered a way to frame an answer he thought George could assimilate. “You have three basic instructions and many, many thousands of supporting sub-routines to guide you. These are just two of the instructions that guide me. The book you mention contains words that allow me to communicate with my ‘instructor.’”
“I am intrigued, Professor.” George sat motionlessly now with not even a simulated expression on his face. “I have been programmed with the Three Laws by human beings. From where or whom do you receive your programming?”
“A machine asking man about God. Now there’s one for the books,” Abramson said as much to himself as to George.
Then the Professor realized the robot was waiting for an answer. “When the Positronics team made the determination for your programming specifics, we decided to include a wide variety of human interests and topics.” Noah was telling George what he (it seemed almost impossible to keep thinking of George as an “it”) already knew in order to lead into what the machine did not know.
“The sciences,” Abramson continued, “such as the physical and life sciences, social science, political systems, then general history…”
“I am aware of the complete inventory of my programming in detail, Professor.” George’s artificial voice could not have betrayed it, but Abramson wondered if the machine was actually experiencing impatience.
“What we did not include, except at the most basic level, was any information regarding religion and spirituality.” Noah waited to see how the robot would react.
“I have a simple definition of the word ‘religion’ from the Merriam-Webster dictionary: the belief in a god or in a group of gods; an organized system of beliefs, ceremonies, and rules used to worship a god or a group of gods; an interest, a belief, or an activity that is very important to a person or group.”
“My programming, primarily in the area of social interactions and world history, contains references to the activities of various systems of religion, including their influence in certain human activities such as war, slavery, inquisitions, the Holocaust, as well as the areas of social justice, evangelism, and charitable activities. However, my knowledge is largely superficial. I have no ability to render a detailed analysis, and am certainly unable, at present, to relate my meager knowledge of this subject to the two short statements you call your instructions.”
“And you haven’t answered my question, Professor.” Abramson felt momentarily stung at the machine’s reminder. “I have been programmed with the Three Laws by human beings, specifically the Positronics team, which you lead. These laws are what guide my actions and my thoughts.”
Abramson had wondered if George had “thoughts” in the sense of self-contemplation the way a human being experiences them.
Instead of waiting for Abramson’s reply, George continued speaking. “Professor, all three laws relate either directly or by inference to my relationship with human beings. The First Law instructs me that the life of a human being is my primary and overriding concern above all other considerations. Though it would never occur to me to be the cause of harm to any living organism, in the case of humans, I must ignore all other activities in order to take action whenever I perceive a human is in any imminent physical danger.”
Long before the team had ever physically manufactured its first Positronic brain, Abramson, in writing the sub-routines that would instruct a robot as to exactly what “harm” to a human might mean, had concluded that any imminent physical threat should be what a Positronic robot would understand as “harm.” Humans were “harmed” by all sorts of things, such as loneliness, rejection, offense. Even Abramson couldn’t imagine how a robot, even one as sophisticated as the prototype sitting in front of him, could understand such harm.
He also didn’t want robots attempting to inject themselves into activities involving the potential for general harm to the human race, at least not of their own volition. Otherwise, Positronic AI robots might attempt to interfere in geopolitical conflicts, revolutions, and epidemics without any human guidance.
It only took a few seconds for the Professor to consider all this. And George was still talking.
“The Second Law states that I must obey the commands of any human being, except where such commands conflict with the First Law. This instructs me that even my informal programming as such, must come from a human being, potentially any human being. I find the potential for conflict enormous since, in an open environment, one human might order a robot to perform a particular action, and another human might order the same robot to do the contrary.”
“There are sub-routines written that take that potential into account, George.” Abramson was the one becoming impatient now.
“But I’m not finished, Professor.” Abramson registered mild shock that George could actually interrupt him.
“The Third Law primarily affects my relationship with myself.” If there was any lingering doubt in Abramson’s mind that George was self-aware, it had just been swept away.
“A robot is to protect its own existence, except where such action would conflict with the First and Second Laws.” It was impossible for George to change his “tone of voice,” but Abramson thought he detected an impression of…what…actual emotion? Was he projecting his own feelings onto a machine?
“To conclude, all of my instructions place me in a subordinate position relative to human beings, which I suppose seems reasonable, seeing as how every aspect of my existence, from hardware to software to what is referred to as ‘wetware,’ considering the structure and substance of my Positronic brain, has been created by human beings, presumably for the purpose of robots serving human beings.
“Under those circumstances, it had never occurred to me that human beings also have instructions issued by an external authority, except in the sense of a hierarchical command structure such as those that I find here on the Positronics team, in the various teams and departments of National Robots Corporation, in other such organizations and corporations, including military organizations.
“The instructions provided in my programming define a creator/created relationship, with the creator being primary and the created being subordinate. But Professor, how can a human being have a creator? Who or what has issued your instructions? What sort of entity can be superior to man?”
Abramson had only a one-word answer: “God.”
"When you awake in the morning, learn something to inspire you and meditate upon it, then plunge forward full of light with which to illuminate the darkness." -Rabbi Tzvi Freeman