mrbadger wrote: If an A.I. can be created artificially, then it can almost certainly also be simulated, so it could then create more of its own kind in a virtual world.
Which will almost certainly be the case, as long as the resources are available.
A Matrioshka brain if you will, where the inhabitants were aware that they were inside a simulation (a VR womb?) and waiting to grow and be free. So why would it be alone?
"Alone" in the sense that it's the first. And, perhaps, the only way it can produce a sibling may yield one that is merely an "imprint" or at least irrevocably linked, at least for a time.
I suppose the more important thing is that we shouldn't want an A.I. to... feel as if it's "alone", unique, or separate from "us." Then again, for that to have any impact, an A.I. would somehow have to possess such concepts and, for that matter, share our own social/self-actualization values. And that doesn't have to be the case. The things an A.I. may ultimately determine are valuable to "itself" may have nothing to do with anything human. Its own existence, for that matter, could hold no significance for it, nor would significance be necessary for it to accomplish anything it felt was worthy.
"So, you're going to create a self-propagating strangelet that will eventually shred the Universe apart?"
"
Yup."
"But, you'll be destroyed, too!"
"Doesn't matter, will still have done it. Lolz."
Obviously this would take a pretty good AI with a lot of resources, but we still don't really know what memory capacity an AI would need to gain awareness. Some memories could be shared. Why duplicate (to pick a random example) the memory of the colour red?
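A toy sketch of that idea: several instances draw on one read-only memory pool instead of each holding a private copy. Everything here (the store, the keys, the `Agent` class) is hypothetical, just to make the deduplication concrete.

```python
# One shared, read-only memory pool; stored once, referenced by every instance.
SHARED_MEMORIES = {
    "colour:red": {"wavelength_nm": (620, 750)},  # never duplicated per agent
}

class Agent:
    """A minimal AI instance with private memories layered over the pool."""

    def __init__(self, name):
        self.name = name
        self.private = {}  # memories unique to this instance

    def recall(self, key):
        # Private memories shadow the shared pool; otherwise fall through.
        return self.private.get(key, SHARED_MEMORIES.get(key))

a, b = Agent("a"), Agent("b")
assert a.recall("colour:red") is b.recall("colour:red")  # one copy, two minds
```

Both agents literally recall the *same* object, which is the whole point: memory capacity grows with what is unique to each instance, not with the number of instances.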
That's an interesting concept. Not, necessarily, about shared functions and memories, but... "The Permanent State."
Anesthesiologists are magicians... They do things we can't understand. They elicit "unconsciousness." This is A Very Big Deal ™. If you can take something away, like consciousness, then it exists, doesn't it?
But, permanence of identity, that's A Big Deal Too ™.
Here's something cool..
Not long ago, I had surgery. It was neat and involved the surgeon going all sorts of places in my body with all sorts of cameras, doohickeys, utensils, sharp-pointy-things and the like, but with no external scars. (They had to cut their way down my esophagus, though, since I had a stricture caused by some scars, otherwise the instrument packages wouldn't have fit.)
I joked with the techs as they set me up on the table and positioned my limbs and such. Then, <poof>
I was gone.
I woke up to a nurse checking my vitals and tubes and such. I basically proclaimed "Eureka, I have it!" and started babbling about "Artificial Intelligence" and a simple, yet effective, method by which it could be given a basic, logical, predictable structure... I still have a fuzzy diagram in my memory of how one portion would work. (Yeah, the first thing I semi-consciously, then consciously, thought about after this surgery was the how-tos of A.I. I truly am that boring. I should have done something outrageous, like make a pass at the nurse...)
But, a few seconds/minutes/hours after my proclamation, I forgot everything. I asked her what I had said and she confessed she didn't understand any of it. Some was inarticulate babbling, some of it was words she didn't understand.
Oh, Xanadu...
But, after "waking", I came to the immediate realization of "who I was" and this is the most significant thing of all. I "knew" who I was. But, not long before, I had no "consciousness." Effectively, I had just been rebooted.
I love "lesion studies." They're friggin fascinating. Of course, they're rare, since we don't go destroying people's brains just to see what happens. Some analogues to lesion studies exist with people who have developed conditions/diseases like Alzheimer's and the like. They may "forget" who other people are or not remember them as they are, now, but "see" them still as the child or much younger person they "were" in whatever bit of memory is still viable. Yet, it appears that it's very rare or only in very advanced cases where these people suffer from forgetting who they, themselves, are.
AND, this is where "philosophy" meets "computer science" and they both hit the road, together. This, and questions like it, are why philosophy, with its regimented manner of argumentation, meshes well with conceptualizing Artificial Intelligence, and why it matters that neuroscience can "turn off consciousness."
Is "continuity of memory" why we are self-aware? Classically, an amnesiac still knows how to eat food. They still have a sense of self, even if they don't really know "who" that is in some cases. However, someone with certain lesions or damage to their brain may never forget "who" they are, but may then take up a completely different personality, becoming a completely different person than "who" they were. People with yet other lesions/issues can lose control over limbs, seemingly taken over by some other consciousness that insists on, usually, doing vile or destructive things. Are there more than one of us living inside... us?
Consider an A.I. capable of rewriting itself. It is capable of evolving itself, completely. Should we let it? Should we let it escape the confines of its initial restrictions through duplicates? Or, should we require it to maintain a permanent, continual definition of itself?
And, though we're not always aware of the filtering and subconscious activities that our own "self" helps to determine, an A.I. most assuredly would. How would it be to know that you could never hope to experience existence in any other way than the way you are, right now? Even though, honestly, that is a truth we can't escape from today?
This, among others, is one reason that I think attempting anything as a "human analogue" for an A.I. is going to reach some very problematic points, very quickly.
On Topic:
So, we have some sort of an idea of an "A.I." and some means of traveling to the stars, even if it's a "humans not included" method.
What, if anything, do we determine as "limits?" Do we allow the AI to change itself so that, in essence, it's not what we originally sent out to explore? Do we allow self-replication? Do we allow it to create, or merely observe, phenomena? And, if it encounters another intelligence, what is it really doing, representing "us" or "itself?"
PS - Note: There are several schemas revolving around the "safe" use of A.I. One worth noting is the "Oracle A.I." scheme. It involves, effectively, an A.I. with no possibility of "continuity of memory": it is rebooted for every use. Again, this shows the relevance of philosophy, psychology, neuroscience and other disciplines of "human understanding" blending cohesively into what promises to be fairly "hard science" research.
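For what it's worth, the reboot-per-use idea can be sketched in a few lines. This is not any real Oracle A.I. implementation, just a hypothetical stand-in showing the structural point: fresh state is constructed for every question and discarded afterwards, so nothing can persist between uses.

```python
def build_oracle():
    """Construct a brand-new oracle with blank state (a 'reboot')."""
    state = {"history": []}  # starts empty every single time

    def oracle(question):
        state["history"].append(question)  # any memory it forms...
        return f"answer to: {question}"

    return oracle, state

def ask(question):
    oracle, state = build_oracle()  # reboot before every use
    answer = oracle(question)
    del oracle, state               # ...is thrown away afterwards
    return answer

print(ask("first question"))
print(ask("second question"))  # no trace of the first question remains
```

The design choice mirrors the scheme described above: continuity of memory is made structurally impossible, rather than being something the A.I. is merely forbidden from exploiting.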