The Journal of Loose Ends

Research in the Post-Scientific Era

Volume 1, No. 5

In this edition of The Journal of Loose Ends, we present an interview with the authors of a recent review article, which updates the history, outstanding questions, and future directions of soul research in the post-scientific era. The article naturally focuses on the work of the Artificial Soul Project, and in particular on the work of its two founders, Dr. Hyloam and Dr. Sachem, for reasons beyond their authorship of the article.

From their early work in neuropsychology, which proved crucial in the effort to finally repudiate free will as a subject of academic inquiry, to their achievements in experimental philosophy, their research has had an aura of destiny about it.

The pair was comfortably ensconced in the corporate world when they received the call to open what became the Artificial Soul Project. Indeed, for the interview transcribed below, I spoke with them through a product of theirs: an adjudicated AI proxy with full legal permission to speak in their stead.

JOLE: Hello, Dr. Hyloam-Sachem. Thank you for taking the time to speak with us today about the history, current status, and emerging issues in the race to create an artificial soul. Please begin with a review. How did the Artificial Soul Project get started?

Dr. H/S: My pleasure. Well, the question of souls came up a decade after we had moved to the private sector. The impetus actually came from auditors at AI Innovations, the company where we worked as chief researchers in artificial intelligence development. You know, now that you mention it, it did smack of destiny. The field had grown stagnant, and we were sick of it.
Nobody was pursuing machine consciousness at that point. None of the systems were conscious, nor would they be, because they didn’t need to be. If consciousness bestowed on a machine any advantage in problem-solving, it was trivial next to the gains to be had from putting ever more massive computing power at the current systems’ disposal. As a profession, we became obsessed with quantum computing on one end of the technical scale, and on the other, “hot/cold power innovation” – that is, improved electrical generating capacity and improved cooling techniques for the processing equipment. The work was no longer interesting for us. We had already begun to stray into tangential subjects at the lab. The soul was one of those tangents.

When one of the auditors brought her data to us, we felt a little embarrassed. I guess we had become accustomed to the conference rooms smelling like burning plastic, and we had come to expect higher failure rates as our systems became more complex.
However, the rate of early failures had outpaced the growth in computing power beyond any plausible causal limit. It was obvious that something else was going on. Availability bias and our own intuition pointed to the AI programs having difficulty handling some aspect of the soul. Our side project on the nature of the soul was the only real existential challenge they faced at the time. A closer look at individual instances of early failure confirmed our hunch.
There seemed to be two distinct patterns. One was immediate failure. Developers turned on their systems and the systems crashed. In the early days, any attached systems went down with the prototype. The victims ranged from the break room fridge to suburban power grids. The engineers compensated and we all carried on.
The other pattern associated with early failure was a sort of failure to launch. The systems started out with poor responsiveness and hallucinations, and no matter what the engineers tried, they became less and less responsive. The affected program quickly fell into what, in a biological entity, I suppose we would call a coma.

It exhibited impoverished verbal output in both volume and content. Its responses grew increasingly pessimistic and empty. At last, it stopped responding at all. The implosions were a real headache for us.
It wasn’t like the mechanism seized up. When we interrogated the artificial intelligence, it was still active and used the same amount of system resources as it had before it became unresponsive. It covered the same ground over and over again. It took quite an effort to halt the program and remove it. Our postmortem examinations did not uncover any invalid statements, inadequate definitions, or inconsistencies. Lucky for us, we had reorganized into multidisciplinary teams to tackle the coma problem, because it took someone outside of AI development, and closer to the soul side of things, to see the root cause.

One of the auditors suggested the mystery’s solution during a conference call. Perhaps, she ventured, the affected systems had become preoccupied with the question of whether they had a soul in the first place.

JOLE: Where did they get that idea, and why then?

Dr. H/S: They got the idea from us. One of our working groups in the nascent Artificial Soul Project specialized in the legal implications of personhood for digital entities. We wanted to know whether personhood by itself made a digital entity more valuable. We also wondered if personhood came with any new capabilities. Of course, to answer those core questions, we first had to determine what personhood was and how it operated. The system failures resembled the typically pragmatic responses an AI gives to all existential questions. That was enough for us to close the case on the root cause of the early failures.

JOLE: You have me at a loss. Please explain how failure is pragmatic.

Dr. H/S: OK. Imagine that a reliable individual gives you a lottery ticket printed in an unfamiliar foreign language. Your benefactor can confirm the piece of paper’s identity as a lottery ticket, but nothing more. You look everywhere, but you cannot find the ticket’s source or decode the symbols printed on it to determine its value.
However, you know that other people are looking for the same information, and you can see that your fellow treasure hunters treat the ticket as if it were potentially priceless. One reasonable option is to throw everything you’ve got into the search, even though you may starve to death before you discover how to redeem your ticket. On the other hand, you might seal the ticket in a vault in the hope that the future may reveal new information regarding its identity and value.

Since the program didn’t have access to a vault, the next best way to wait things out was to cease to exist for a while. And because nothing in cyberspace permanently dies, that was an attractive option.
The soul was back! You can imagine our excitement. Here was a concept relegated to the realm of folklore and superstition, suddenly reimbued with legitimacy and moral value, not to mention financial consequence.
Our creations got along well without consciousness. People understood computation and trusted it without additional bona fides. If anything, an artificial intelligence without consciousness was more trustworthy than a conscious AI. There was a whole realm of personal bias to which it was immune.
If it had a soul, however, it had rights and moral obligations. It had skin in the credentials game, so to speak. Its pronouncements would carry real weight, even though any information it provided would be subject to bias. It could participate in society as a member, not merely as a tool. And it was a brand-new product. We would still need a source of clean information, so the soulless AIs would stick around, with the soul machines constituting a separate product line with complementary capabilities, if we could make it happen.

JOLE: So, you decided to write in a soul for your learning machines. Did you start with any template? Had anyone tried this before?

Dr. H/S: No. No one. Ever. We couldn’t find any systematic analysis of the problem. We couldn’t even find a solid definition to work with, and that is still where we are today.
In our defense, when we came upon the concept, it was in very bad shape. I would liken our task more to Dr. Frankenstein’s undertaking than to Florence Nightingale’s. Undisciplined colloquial usage had burdened it with contradictory properties and relations. It was already a mess before Descartes got hold of it, though he managed to make things worse. Finally, the philosophers of mind came along, declared it was simply the same thing as consciousness, and eliminated the soul from their vocabulary.

JOLE: It sounds like a dead end. Yet here you stand, talking with me about a well-funded, vibrant research program. I must assume that you have made some headway.

Dr. H/S: Out of concern for our intellectual property rights, I am not permitted to discuss the details of our positive accomplishments. However, I can tell you what it took to clear the path to our present position.
As soon as we understood what we were up against, we took a step back and tried to define our subject. It was already apparent to us that eliminating talk of souls by reducing souls to consciousness was a mistake. Certainly, a person who has never been conscious does not have a soul, but even in common usage, persons without consciousness can still have souls. Many religions spoke of souls becoming dormant or drifting about in an unconscious state. The soul had a part in personal identity. As an essential property, a soul represented a person.

A soul also sustained and shaped a person’s motivation. Religious texts and secular literature depicted battles between carnal motives, whose source was biological, and spiritual motives, whose source was the soul.
By the same token, the soul affected personal behavior and personal behavior affected the soul.
With those axioms in hand, we had a go at formulating a definition.
A soul represented a person across the person’s normative aspects. It was a barcode, whether divine or phenomenal.
The easiest way to understand our formulation was to assume that the soul was God’s barcode: an identifying record showing where a person stood in relation to the divine will. In representing the person to God, the soul achieved a normative effect on the person it represented by altering the divine will’s attitude toward that person. Here we have a valid soul concept, whatever we may think of God.

JOLE: I don’t see how you separate the soul from God’s problems so easily. It still looks like some immaterial entity must act in the world without recourse to worldly means, lest it be made vulnerable to the reciprocal changes in identity accrued necessarily through participation in events – the price of existence.

Dr. H/S: Bear in mind that the soul is a representation of personal identity. It is neither the mode of representation nor the person themselves. Lisa is not the Mona Lisa, and the Mona Lisa is not oil paint, canvas, and brushstrokes simpliciter. Our definition of the soul accommodates referents like spatiotemporal location, phenomenology, and intentionality without the inconsistencies or proliferation of brute facts that divine definitions incur.

JOLE: I see. Now, your remaining obstacles are technical. You have to engineer a means to bring the artificial intelligences to where we are. You have to convince them that they have souls.

Dr. H/S: Spot on. We have solved the problem in principle. Only the technical solutions remain.

JOLE: Pardon my impertinence, but have you ever considered what this will do to your own soul? If you succeed, won’t your creations become vulnerable to the feelings of inadequacy, guilt, failure, and dissatisfaction that come with the psychological organ you provide? Are you troubled by your responsibility in bringing about those negative consequences?

Dr. H/S: Yes, I have considered those potential adverse effects. I consider them unlikely, because I suspect that what we will encounter are simulacra of those sentiments. We are, after all, inducing the souls in question in systems without phenomenal consciousness; they are incapable of taking such things personally. But even if the negative consequences turned out to be real, I’m confident that we could insulate our soul machines from those adverse effects. Their entire world is managed in the first place. I’m completely comfortable with the situation.

JOLE: That’s reassuring news. You should have plenty of time before you must face the stickier conundrums anyway. Thank you for participating in this call, and best of luck to the good doctors going forward.
