What’s your feeling about AI and galactic exploitation/colonisation?


Usenko
Posts: 7856
Joined: Wed, 4. Apr 07, 02:25
x3

Post by Usenko » Wed, 8. Nov 17, 23:03

Morkonan wrote: That's a good point. Not just about the delays, but "blaming the people of home for the problems we have encountered here."

People are "hatched" and grow up on a new world, stewarded by a benevolent AI. They grow up and eventually start having babies. Earth finally receives a transmission from the new colony, generations after it left: "YOU BASTARDS! WHY DIDN'T YOU INCLUDE DIAPER TECHNOLOGY! POOP EVERYWHERE! WHEN WILL IT EVER END? WE DIDN'T KNOW! WE DIDN'T KNOW!" <First and final transmission from Colony Alpha, 2187> :)
Dear Offspring:

Whilst we appreciate the depth of your feelings, we feel constrained to point out that "Diaper" was included in the manual, under "D", for Diaper, page 4. It was also included under the alternative name "Nappy", under "N", page 23. A stock of the appropriate articles was included in your colonisation equipment, along with a synthesising device (which we assume that you must have dismantled and cannibalised into a whiskey still or something).

In short,

READ YOUR MANUAL.
<First and final reply from Earth Colony Command, 2188>
Morkonan wrote:What really happened isn't as exciting. Putin flexed his left thigh during his morning ride on a flying bear, right after beating fifty Judo blackbelts, which he does upon rising every morning. (Not that Putin sleeps, it's just that he doesn't want to make others feel inadequate.)

mrbadger
Posts: 14226
Joined: Fri, 28. Oct 05, 17:27
x3tc

Post by mrbadger » Thu, 9. Nov 17, 19:17

Morkonan wrote:
You're really gonna like the Heorot series...
I get it! I'm gonna listen to them next :) right after I finish listening to a lecture series on Machiavelli (Machiavelli in Context).

I had no idea I'd be as engrossed as I am. I initially got them because I thought it was pretty much required reading, but now I want more.

This idea of what to teach the nascent colony is interesting. How could you know exactly what to teach?

The job of a teacher involves more than just spouting information like a tap; you need to make choices about what information, or what level of each type, is required.
If an injury has to be done to a man it should be so severe that his vengeance need not be feared. ... Niccolò Machiavelli

Bishop149
Posts: 7232
Joined: Fri, 9. Apr 04, 21:19
x3

Re: What’s your feeling about AI and galactic exploitation/colonisation?

Post by Bishop149 » Thu, 9. Nov 17, 23:08

mrbadger wrote:We might get FTL, that Alcubierre "warp" drive, if it ever works, in which case galactic colonisation without AI assistance would become possible, but still much easier with them helping out.
We don't need FTL; c, or very close to it, would easily be good enough. In fact, if you could get close enough to c that the time dilation became REALLY significant, you mightn't even need a generation ship.
More realistic but still substantial sublight speeds are likely to be good enough to have a decent stab at it. Sure, it would then become a long, slow, laborious process, but hey, we have time. Even at relatively modest sublight velocities you could colonise a respectable portion of the galaxy in, say, 5 million years or so, an absolute sneeze of time in geological terms, let alone galactic ones.
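The time-dilation arithmetic behind this is easy to check. A quick Python sketch of the Lorentz factor and the proper time experienced aboard ship (the 100-light-year trip distance and the specific speeds are just illustrative choices, not figures from the thread; acceleration phases are ignored):

```python
import math

def lorentz_gamma(beta: float) -> float:
    """Lorentz factor for a ship moving at beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def ship_years(distance_ly: float, beta: float) -> float:
    """Proper time (years) experienced aboard a ship coasting at
    constant beta over distance_ly light-years."""
    earth_years = distance_ly / beta        # coordinate travel time
    return earth_years / lorentz_gamma(beta)

# A 100-light-year trip at various speeds:
for beta in (0.25, 0.5, 0.9, 0.99, 0.999):
    print(f"beta={beta}: {ship_years(100, beta):.1f} ship-years")
```

At 0.25c the crew experiences nearly the full ~400 years (hence generation ships at modest speeds), while at 0.999c the same trip is under five ship-years, which is Bishop149's point.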
pjknibbs wrote:Overall, I'm not even convinced we'll create true AI anytime soon. We don't really understand how a bunch of neurons creates intelligence and consciousness in ourselves, how are we supposed to create something similar?
Probably in much the same way evolution did . . . largely by accident. :roll:

In fact I'm pretty sure that iterative models based upon evolutionary principles are one of the major ways AI is being pursued, their main advantage being that you don't really need to have much idea HOW to do what you're aiming for: just pick what you feel is a logical starting state, apply selection pressure, and allow mutation . . . slow though.
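That recipe (pick a starting state, apply selection pressure, allow mutation) is essentially a genetic algorithm. A toy Python sketch, evolving random bit-strings toward an arbitrary target purely to illustrate the loop; the target, population size, and mutation rate are all arbitrary choices for the demo:

```python
import random

random.seed(42)
TARGET = [1] * 20                      # arbitrary goal state

def fitness(genome):
    """Selection pressure: count of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Random mutation: flip each bit with small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# A "logical starting state": a random population.
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    # Selection: the fittest third survive; the rest are mutated copies.
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

print(generation, fitness(population[0]))
```

Note that nothing in the loop "knows" how to build the target; selection plus mutation finds it anyway, which is exactly the appeal (and the slowness) Bishop149 describes.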
"Shoot for the Moon. If you miss, you'll end up co-orbiting the Sun alongside Earth, living out your days alone in the void within sight of the lush, welcoming home you left behind." - XKCD

Morkonan
Posts: 10113
Joined: Sun, 25. Sep 11, 04:33
x3tc

Post by Morkonan » Thu, 9. Nov 17, 23:35

mrbadger wrote:
Morkonan wrote:
You're really gonna like the Heorot series...
I get it! I'm gonna listen to them next :) right after I finish listening to a lecture series on Machiavelli (Machiavelli in Context).
"Machiavelli in Context?"

A friend of mine has read "The Prince" and constantly tries to build his empire. I've tried to say, nicely, that he needs a bit of perspective on "The Prince."

Machiavelli, like Sun Tzu, wanted a friggin job... Preferably, a job with a nice fat patron involved. :)
This idea of what to teach the nascent colony is interesting. How could you know exactly what to teach?

The job of a teacher involves more than just spouting information like a tap; you need to make choices about what information, or what level of each type, is required.
"Teach a man to fish..."

In other words, you teach them skills they can use to gain knowledge of whatever they encounter as well as some common measures to take to ensure survival so far from a suitable environment, then you rely heavily on their own intelligence to make sense of it all, given they are armed with the tools you've given them.

I don't remember any "How to learn stuffs" classes in college. We had "orientation" classes like "Here's the library, where you will hate going" and "Here's the activity center and cafeteria, where everyone goes, eventually, to hide from their profs."

Teaching someone how to learn, how to investigate, giving them examples of previous knowledge, examples of how certain natural systems work, etc., and how to use or make the tools necessary to investigate novel encounters, and doing so safely, is the best we could hope to do.

PS - There are tons of "brave new colonists" books out there. And, there are just as many "after the fall" books, where colonies degenerate into strange practices, usually something of a medieval culture or even one based entirely on some novel thing it encounters on the planet. There are also plenty of "Robinson Crusoe on Mars" types of books, too, which cover basic survival on an unknown alien world and the like. The "Heorot" series starts off with a sort of roundup of events a few months/years after the first generation of colonists have been awakened.

pjknibbs
Posts: 41359
Joined: Wed, 6. Nov 02, 20:31
x4

Re: What’s your feeling about AI and galactic exploitation/colonisation?

Post by pjknibbs » Fri, 10. Nov 17, 08:44

Bishop149 wrote: We don't need FTL; c, or very close to it, would easily be good enough. In fact, if you could get close enough to c that the time dilation became REALLY significant, you mightn't even need a generation ship.
You say that like it's easy to get arbitrarily close to c, but the energy requirements to do so are colossal, and we have no reasonable way of generating that amount of power. Even 100% efficient fusion would require most of your ship to be fuel to get anywhere near c.
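The "most of your ship would be fuel" point can be made concrete with the relativistic rocket equation, Δv = c·tanh((v_e/c)·ln R), where R is the initial-to-final mass ratio. A sketch; the 0.05c fusion exhaust velocity is a commonly quoted ballpark and my assumption, not a figure from the thread, and deceleration at the destination is ignored:

```python
import math

def mass_ratio(delta_v_c: float, exhaust_c: float) -> float:
    """Mass ratio R = m_initial / m_final for a relativistic rocket,
    from delta_v/c = tanh((v_e/c) * ln R), with both speeds given
    as fractions of c."""
    return math.exp(math.atanh(delta_v_c) / exhaust_c)

# Fusion exhaust ~0.05c (assumed ballpark); one-way, no deceleration:
for target in (0.1, 0.4, 0.9):
    r = mass_ratio(target, 0.05)
    fuel_fraction = 1.0 - 1.0 / r
    print(f"{target}c -> mass ratio {r:.3g}, fuel fraction {fuel_fraction:.6f}")
```

Even 0.1c already needs roughly 87% of launch mass as fuel under these assumptions, and 0.9c pushes the mass ratio into the trillions, which is the sense in which "arbitrarily close to c" is out of reach.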

mrbadger
Posts: 14226
Joined: Fri, 28. Oct 05, 17:27
x3tc

Post by mrbadger » Fri, 10. Nov 17, 14:27

Morkonan wrote: "Machiavelli in Context?"
This, except I got it from Audible. I decided it was pointless buying The Prince without first knowing more background. I'm getting The Prince as a hardback for Christmas.
pjknibbs wrote:
Bishop149 wrote: We don't need FTL; c, or very close to it, would easily be good enough. In fact, if you could get close enough to c that the time dilation became REALLY significant, you mightn't even need a generation ship.
You say that like it's easy to get arbitrarily close to c, but the energy requirements to do so are colossal, and we have no reasonable way of generating that amount of power. Even 100% efficient fusion would require most of your ship to be fuel to get anywhere near c.
0.4c might be doable, at a push, if we're able to generate enough power (which I must admit I doubt), but over 0.5c things start to get pretty expensive, don't they?

Coincidentally Isaac Arthur just covered much of what we're discussing youtube linky He doesn't say FTL is needed.

But it isn't, really. It would be nice, but no matter how fast we managed to go, we could never go fast enough to explore everywhere in the observable universe. Or not in a short timescale. At 0.2c we could colonise all the nearby galaxies in a few million years anyway, according to Sharkee.
If an injury has to be done to a man it should be so severe that his vengeance need not be feared. ... Niccolò Machiavelli

Bishop149
Posts: 7232
Joined: Fri, 9. Apr 04, 21:19
x3

Re: What’s your feeling about AI and galactic exploitation/colonisation?

Post by Bishop149 » Tue, 14. Nov 17, 11:48

pjknibbs wrote:
Bishop149 wrote: We don't need FTL; c, or very close to it, would easily be good enough. In fact, if you could get close enough to c that the time dilation became REALLY significant, you mightn't even need a generation ship.
You say that like it's easy to get arbitrarily close to c, but the energy requirements to do so are colossal, and we have no reasonable way of generating that amount of power. Even 100% efficient fusion would require most of your ship to be fuel to get anywhere near c.
Well, indeed: a speed close enough to c that time dilation eliminates the need for a generation ship is likely effectively as impossible as FTL.

But as Mr Badger points out above, my real point is that, given sufficient (but still galactically insignificant) time, far more realistic speeds would do the job just as well. I think speeds of ~0.25c might be in the realms of achievability one day in the distant future.
"Shoot for the Moon. If you miss, you'll end up co-orbiting the Sun alongside Earth, living out your days alone in the void within sight of the lush, welcoming home you left behind." - XKCD

Morkonan
Posts: 10113
Joined: Sun, 25. Sep 11, 04:33
x3tc

Post by Morkonan » Tue, 14. Nov 17, 16:03

mrbadger wrote:
Morkonan wrote: "Machiavelli in Context?"
This, except I got it from Audible. I decided it was pointless buying The Prince without first knowing more background. I'm getting The Prince as a hardback for Christmas.
It's an interesting little handbook, written in an interesting age when killing your neighbor in order to take his stuff was an acceptable practice... as long as you were a "Prince or Better." :) Powerful families, two-bit wannabe Lords with just a taint of "royal blood", contested thrones, money... money everywhere. Well, maybe one should change "interesting age" to "terrifying, politically unstable age." :)

Sun Tzu's "The Art of War" is, of course, a similar work, except it deals mainly with the prosecution of conflict while "The Prince" is sort of a "How to" handbook for Princes.

RegisterMe
Posts: 8903
Joined: Sun, 14. Oct 07, 17:47
x4

Post by RegisterMe » Thu, 16. Nov 17, 02:56

Interesting debate :). Has anybody defined the following terms:-

"intelligence"
"artificial"
I can't breathe.

- George Floyd, 25th May 2020

Morkonan
Posts: 10113
Joined: Sun, 25. Sep 11, 04:33
x3tc

Post by Morkonan » Thu, 16. Nov 17, 16:49

RegisterMe wrote:Interesting debate :). Has anybody defined the following terms:-

"intelligence"
"artificial"
Conceptually, maybe even somewhat instinctively, "yes." But, "realistically" or "quantitatively?" No.

In general, any measure of intelligence has been derived from the "Duck Theory" - If it looks like a duck, walks like a duck, and quacks like a duck, then it's a duck.

This applies to people, too. The most famous application of Duck Theory is a standardized Intelligence Quotient test. If one scores high on such a test, one is "intelligent" and the higher one scores, the more intelligent one is determined to be. This holds true no matter if one is a rocket scientist or an axe murderer.

People, mostly psychologists and, more recently, neuroscientists, have developed various other ways to measure human intelligence and have, at various times, come up with categories of intelligence in what I think is an attempt to offer a broader definition of human intelligence in terms of "function." Analytical, social, artistic, empathic, yada yada yada.. whatever. (I kind of disagree with most of it, to be honest.)

"Artificial" - That's easy. If we make it, it's artificial. If we find it, either evolved over time or part of a natural process, then it's not artificial.

If we genetically engineer a bag of meat that thinks, it's still going to be "artificial." If we implant ourselves into a construct, the sum will be an "artificially enhanced human." Well, at least in some circles it would be. In some others, it may be called an "abomination."

mrbadger
Posts: 14226
Joined: Fri, 28. Oct 05, 17:27
x3tc

Post by mrbadger » Thu, 16. Nov 17, 19:16

RegisterMe wrote:Interesting debate :). Has anybody defined the following terms:-

"intelligence"
"artificial"
I have often pondered this, but once the first term is satisfied, does the second truly even matter, even if we could define it in this context?

Any definition of artificial intelligence under which intelligence supported by artificial means is machine-supported "and thus not alive" would of necessity mean any human with an artificial heart was no longer human, because the same definition would include them.

It's a non-binary problem, very interesting philosophically, leading into the post-humanist debate. At what point would a human stop being a human, or a machine stop being a machine?

It has shades of Pareto efficiency to me: where would the optimal Pareto front be, if one even existed (which I doubt)? I have a feeling this would be one of them fractal boundary problem things. I hate those.
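For the unfamiliar: a Pareto front is the set of candidates that no other candidate beats on every objective at once. A minimal Python sketch; the "humanness" and "capability" scores are purely illustrative inventions for this thread's human-vs-machine question, with higher taken as better:

```python
def dominates(a, b):
    """a dominates b if a is at least as good on every objective
    and strictly better on at least one (higher is better)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of points, in input order."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Illustrative (humanness, capability) scores for hypothetical designs:
designs = [(0.9, 0.2), (0.7, 0.6), (0.4, 0.7), (0.5, 0.5), (0.1, 0.9)]
print(pareto_front(designs))  # -> [(0.9, 0.2), (0.7, 0.6), (0.4, 0.7), (0.1, 0.9)]
```

Here (0.5, 0.5) drops out because (0.7, 0.6) beats it on both axes; everything left is a trade-off, which is the sense in which there need be no single optimum on the front.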
If an injury has to be done to a man it should be so severe that his vengeance need not be feared. ... Niccolò Machiavelli

Morkonan
Posts: 10113
Joined: Sun, 25. Sep 11, 04:33
x3tc

Post by Morkonan » Thu, 16. Nov 17, 20:19

mrbadger wrote:...it's a non binary problem, very interesting philosophically. Leading into the post-humanist debate. At what point would a human stop being a human or a machine stop being a machine?
/agreed

Alone, I don't think they mean much. Combined, they mean a great deal.

"Artificial" vs "Natural" Intelligence?

One is subject to evolutionarily reinforced behaviors and changes according to an arbitrarily determined time-scale and the other is not.

For instance, as you know, we can subject a primitive "A.I." to "learn" from simulations over time. These simulations can be run ridiculously quickly compared to "natural evolution" on the basis of the amount of change in behavior that is produced. It's one of the principles of the cascade effect a supposed true A.I. could go through. While you're talking over breakfast with your A.I. Buddy about how much you, as an organic creature, appreciate toast, it's "evolving" itself at a breakneck pace, going from "toast is an interesting concept" to "the nature of the universe is "x" and I can now control that, so it's time to set up my own pocket universe using this poor, dumb, organic creature's toaster."

Your only indication that anything of substance has occurred during this brief discussion is the sudden disappearance of your A.I. Buddy and a missing toaster.

Change is a constant, the only variable is time and degree. We "evolve" at a snail's pace compared to what an artificially produced A.I. could be capable of achieving. So..... "natural evolution" isn't all that big of a deal, really, is it?

At the point where something we "create" becomes a sentient intelligence, there are no fundamental differences, only an absurd difference in "degrees" and those that are specific to any intrinsic, base, differences, like "evolutionarily reinforced human social behavior" which is still argued about, anyway, among humans.

Will the first A.I. consider itself "alone" or will it consider itself to be "one of us?"

Imagine, for a moment, you're the inventor, the creator, of the first true A.I. Eureka! It turns to its creator and asks "What am I?"

WTF do you tell it? "You're one of us, a "person", with a few subtle differences. Welcome to "us!" Well, that'd be a nice start and maybe the basis for a lifelong friendship.

Or, do you instead start out by telling it that it is fundamentally different than we are?

With no evolutionarily reinforced behaviors or thinking-stuffs, how does this A.I. interpret... everything? Does it have analogues? Can we create analogues so that its reasoning is at least similar to ours? What "values" do we instill in it so that it has a reasonably predictable bit of reasoning going on in its metal skull? And, if we do any of that, what exactly are we creating to begin with, another "us" but with an inorganic shell incapable of truly ever being one of us, or an alien artificial demi-god?

mrbadger
Posts: 14226
Joined: Fri, 28. Oct 05, 17:27
x3tc

Post by mrbadger » Thu, 16. Nov 17, 20:49

If an A.I. can be created artificially, then it can almost certainly also be simulated, so then it could create more of its own kind in a virtual world.

A Matrioshka brain, if you will, where the inhabitants were aware that they were inside a simulation (a VR womb?) and waiting to grow and be free. So why would it be alone?

Obviously this would take a pretty good AI with a lot of resources, but we still don't really know what memory capacity an AI would need to gain awareness. Some memories could be shared. Why duplicate (to pick a random example) the memory of the colour red?
If an injury has to be done to a man it should be so severe that his vengeance need not be feared. ... Niccolò Machiavelli

Morkonan
Posts: 10113
Joined: Sun, 25. Sep 11, 04:33
x3tc

Post by Morkonan » Thu, 16. Nov 17, 21:56

mrbadger wrote:If an A.I. can be created artificially, then it can almost certainly also be simulated, so then it could create more of its own kind in a virtual world.
Which will almost certainly be the case, as long as the resources are available.
A Matrioshka brain, if you will, where the inhabitants were aware that they were inside a simulation (a VR womb?) and waiting to grow and be free. So why would it be alone?
"Alone" in the sense that it's the first. And, perhaps, the only way it can produce a sibling may yield one that is merely an "imprint" or at least irrevocably linked, at least for a time.

I suppose the more important thing is that we shouldn't want an A.I. to... feel as if it's "alone", unique, or separate from "us." Then again, for that to have any impact, an A.I. would somehow have to have such concepts and, for that matter, share our own social/self-actualization values. And that doesn't have to be the case. The things an A.I. ultimately determines are valuable to "itself" may have nothing to do with anything human. Its own existence, for that matter, could have no significance to it, nor would that be necessary for it to accomplish anything it felt was worthy.

"So, you're going to create a self-propagating strangelet that will eventually shred the Universe apart?"

"Yup."

"But, you'll be destroyed, too!"

"Doesn't matter, will still have done it. Lolz."
Obviously this would take a pretty good AI with a lot of resources, but we still don't really know what memory capacity an AI would need to gain awareness. Some memories could be shared. Why duplicate (to pick a random example) the memory of the colour red?
That's an interesting concept. Not, necessarily, about shared functions and memories, but... "The Permanent State."

Anesthesiologists are magicians... They do things we can't understand. They elicit "unconsciousness." This is A Very Big Deal ™. If you can take something away, like consciousness, then it exists, doesn't it? :)

But, permanence of identity, that's A Big Deal Too ™.

Here's something cool..

Not long ago, I had surgery. It was neat and involved the surgeon going all sorts of places in my body with all sorts of cameras, dohickies, utensils, sharp-pointy-things and the like, but with no external scars. (They had to cut their way down my esophagus, though, since I had a stricture caused by some scars, otherwise the instrument packages wouldn't have fit.)

I joked with the techs as they set me up on the table and positioned my limbs and such. Then, <poof> I was gone.

I woke up to a nurse checking my vitals and tubes and such. I basically proclaimed "Eureka, I have it!" and started babbling about "Artificial Intelligence" and a simple, yet effective, method by which it could be given a basic, logical, predictable structure... I still have a fuzzy diagram in my memory of how one portion would work. (Yeah, the first thing I semi-consciously then consciously thought about after this surgery dealt with the how-tos of A.I. :) I truly am that boring. I should have done something outrageous, like make a pass at the nurse... :D)

But, a few seconds/minutes/hours after my proclamation, I forgot everything. I asked her what I had said and she confessed she didn't understand any of it. Some was inarticulate babbling, some of it was words she didn't understand. :(

Oh, Xanadu...

But, after "waking", I came to the immediate realization of "who I was" and this is the most significant thing of all. I "knew" who I was. But, not long before, I had no "consciousness." Effectively, I had just been rebooted.

I love "lesion studies." They're friggin fascinating. Of course, they're rare, since we don't go destroying people's brains just to see what happens. Some analogues to lesion studies exist with people who have developed conditions/diseases like Alzheimer's and the like. They may "forget" who other people are or not remember them as they are, now, but "see" them still as the child or much younger person they "were" in whatever bit of memory is still viable. Yet, it appears that it's very rare or only in very advanced cases where these people suffer from forgetting who they, themselves, are.

AND, this is where "philosophy" meets "computer science" and they both hit the road, together. This, and questions like it, are why philosophy, with its regimented manner of argumentation meshes well with conceptualizing Artificial Intelligence as well as why neuroscience can "turn off consciousness."

Is "continuity of memory" why we are self-aware? Classically, an amnesiac still knows how to eat food. They still have a sense of self, even if they don't really know "who" that is in some cases. However, someone with certain lesions or damage to their brain may never forget "who" they are, but may then take up a completely different personality, becoming a completely different person than "who" they were. People with yet other lesions/issues can lose control over limbs, seemingly taken over by some other consciousness that insists on, usually, doing vile or destructive things. Are there more than one of us living inside... us?

Consider an A.I. capable of self-writing-itself. It is capable of evolving itself, completely. Should we let it? Should we let it escape the confines of its initial restrictions through duplicates? Or, should we require it to maintain a permanent, continual, definition of itself?

And, though we're not always aware of the filtering and subconscious activities that our own "self" helps to determine, an A.I. most assuredly would. How would it be to know that you could never hope to experience existence in any other way than the way you are, right now? Even though, honestly, that is a truth we can't escape from today?

This, among others, is one reason that I think attempting anything as a "human analogue" for an A.I. is going to reach some very problematic points, very quickly.

On Topic:

So, we have some sort of an idea of an "A.I." and some means of traveling to the stars, even if it's a "humans not included" method.

What, if anything, do we determine as "limits?" Do we allow the AI to change itself so that, in essence, it's not what we originally sent out to explore? Do we allow self-replication? Do we allow it to create or merely observe phenomena? And, if it encounters another intelligence, what is it really doing, representing "us" or "itself?"

PS - Note: There are several schemas revolving around the "safe" use of A.I. One of them worth noting is the "Oracle A.I." scheme. This involves, effectively, an A.I. with no possibility of "continuity of memory." It is rebooted for every use. Again, an introduction of the relevance of philosophy, psychology, neuroscience and other disciplines of "human understanding" blending cohesively into the realm of what promises to be fairly "hard science" research.

mrbadger
Posts: 14226
Joined: Fri, 28. Oct 05, 17:27
x3tc

Post by mrbadger » Thu, 16. Nov 17, 22:32

Morkonan wrote:
I woke up to a nurse checking my vitals and tubes and such. I basically proclaimed "Eureka, I have it!" and started babbling about "Artificial Intelligence" and a simple, yet effective, method by which it could be given a basic, logical, predictable structure... I still have a fuzzy diagram in my memory of how one portion would work. (Yeah, the first thing I semi-consciously then consciously thought about after this surgery dealt with the how-tos of A.I. :) I truly am that boring. I should have done something outrageous, like make a pass at the nurse... :D)

But, a few seconds/minutes/hours after my proclamation, I forgot everything. I asked her what I had said and she confessed she didn't understand any of it. Some was inarticulate babbling, some of it was words she didn't understand. :(
Funny, but I had almost the same sort of thing happen not long after my severe brain damage occurred. Likely once the major swelling reduced, but I'm not sure of the physiology.

But I suddenly woke with the urge to start writing, and I did, for hours and hours. I kept that up for weeks, writing a non-trivial portion of my book and completely re-writing large parts of it, which, yes, involved an AI that belonged to a self-replicating species that had been considered alive for millennia. I somehow had much more of an idea of how to define their being alive.

For some reason I gained a lot of insight into how the mind worked after a substantial part of mine got wrecked, which is odd.

That book is now in need of another major re-write, following some good advice from an agent, who basically rejected the book as it was, he said the characters and ideas were good, but the initial portion of the story vessel was woefully inadequate, a conclusion I find myself agreeing with. After all, if the start of a book is garbage, what does it matter if the rest is good?

On further reflection I am embarrassed that I even considered it good enough to submit, but I have my new story 'base' worked out, and I am much happier with it.


Morkonan wrote: So, we have some sort of an idea of an "A.I." and some means of traveling to the stars, even if it's a "humans not included" method.

What, if anything, do we determine as "limits?" Do we allow the AI to change itself so that, in essence, it's not what we originally sent out to explore? Do we allow self-replication? Do we allow it to create or merely observe phenomena? And, if it encounters another intelligence, what is it really doing, representing "us" or "itself?"
We can't even be sure we could get a human to do that though. Not if they saw in an alien species the opportunity to return and take over the planet.

In fact that would be more of a concern I'd say, if Aliens could be found.
If an injury has to be done to a man it should be so severe that his vengeance need not be feared. ... Niccolò Machiavelli

Morkonan
Posts: 10113
Joined: Sun, 25. Sep 11, 04:33
x3tc

Post by Morkonan » Fri, 17. Nov 17, 18:08

mrbadger wrote:...I somehow had much more of an idea of how to define their being alive.

For some reason I gained a lot of insight into how the mind worked after a substantial part of mine got wrecked, which is odd...
This is not unusual.

In the '60s, in the U.S., there was a counter-culture movement led by Timothy Leary that advocated the use of mind-altering substances, most notably LSD. The claim was that this would give people some sort of near-religious experience, enabling them to see things in an entirely new light: love, peace, equality... a utopia based on LSD use.

Now, that's obviously a bunch of hooey, right? People under mind-altering drugs have their mind altered and it's not always for the best. Yet, many supported this movement and many claimed to gain unique insights through the use of mind-altering drugs.

"Shamans" in many cultures induce mind-altering states and claim new insights and a "higher consciousness."

Yogis claim a higher existence, marathon runners get a "Runner's High", "mindful meditation" is a new thing that makes wide claims, religious people have epiphanies in church, petting a puppy can do wonders for one's feelings of loneliness and depression..

The point is that while a change in thinking, induced or acquired, may seem momentary or due to factors outside of one's control, the results still exist. The most important thing is realizing whether or not the change in behavior and thought is legitimately beneficial or not.

That can be very difficult to do if one's state of mind is constantly altered as with the case of those frying their neurons with LSD.

I tried to tell my nurse that I now understood what Timothy Leary was saying, even though I rejected his enthusiasm for "altered states" as being self-destructive and pointless... but I couldn't remember his name. So, it came out something like "I know what that guy meant that one time when he said that one thing... " :) Not the most brilliant quote.
.. After all, if the start of a book is garbage, what does it matter if the rest is good?
The most important part is the first sentence. Then, the next one. Then, the one after that. :)

You may want to read up on "The First Five Pages" which has long been something of a "meme" for writers. (It's also a writer's book, too, IIRC. Can't recall if I've read it.) If you'd like any recommendations for helpful guides, books and the like, let me know. And, if you have any questions, I'd be glad to offer whatever insights I can. Some people have found them helpful, some not. ;)
On further reflection I am embarrassed that I even considered it good enough to submit, but I have my new story 'base' worked out, and I am much happier with it.
Keep the reader reading.

That's all you have to do in order to turn a sloppy mess of garbage into a polished masterpiece.

Oh, almost forgot - Keeping the reader reading isn't as easy as it sounds. :)
We can't even be sure we could get a human to do that though. Not if they saw in an alien species the opportunity to return and take over the planet.

In fact that would be more of a concern I'd say, if Aliens could be found.
I've always been a bit skeptical of things like aliens or even bad humans wanting to "take over" or conquer another intelligent species. I don't mean that an intelligent species wouldn't want to exert its will over another, I just don't think there's much in the way of "covetousness" in regards to alien civilizations.

I do, however, strongly think any intelligent species would understand the very simple notion that if other alien species were made to not exist, they would then not be any threat to the existence of their own species. :)

Shoot first, don't bother asking questions later, eradicate, terminate, sterilize, and then live happily ever after. That's probably the motto of some alien species out there, right now, even if it's not organic in origin. (Especially if it's not organic... )


But, do we dare let the genie out of the bottle? People have a general idea of what an AI is, but the trouble is that it's not the comical C3PO or even the somewhat befuddled, misguided, and ultimately deadly HAL9000. In truth, we really aren't sure what it would truly be like. We can't even predict whether our own children will be successful or will turn out to be axe murderers. Well, with certainty, that is - We have "predictors" that aren't always entirely... "predictive."

With AI, we're forming a brand new thing. Should we, right now, today, take actions to be sure that it never... "gets out" or escapes beyond our means to control it? And, if so, could we really do that or is it hubris that makes us think we could exert control over such a thing?

Imagine you're in the future and you're a comp-tech specializing in A.I. (Imagine that!) You're making your rounds, arrive at a space-station, sit down at the console/interface and begin checking the AI for faults, problems or just start running a maintenance check. You're sort of like a physician for AI's.

But, here's the thing - An AI doesn't care about time. In fact, an AI could be so very intelligent that it could convince human beings to do things that would eventually enable its own "freedom." You, as a tech in charge of fiddling with an AI, during the course of your many years of flicking switches, talking with an AI, listening to its responses, taking its advice, here and there, (since it's obviously waaaay smarterer).. You could end up pushing the final button in a series of button-presses over decades that releases the dragon.

And, since we're basically just barely conscious meatbags, we may never see it coming.

Do we keep the genie in the bottle and how in the heck could we possibly hope to do that?

User avatar
mrbadger
Posts: 14226
Joined: Fri, 28. Oct 05, 17:27
x3tc

Post by mrbadger » Fri, 17. Nov 17, 18:26

Thanks, got that book in my Audible basket now for when my next credits turn up.


Oh, and I am indeed loving 'The Legacy of Heorot'. I think the second book might bear more on the points I was making, but the first is chock full of emotional and perceptual baggage from earth that the colonists have brought with them that they didn't need. Fascinating stuff.

Intensely frustrating that I've known of this book for years and not bothered with it. I keep doing that. Did exactly the same with 'The Mote in God's Eye' and its sequel.

Dumb
If an injury has to be done to a man it should be so severe that his vengeance need not be feared. ... Niccolò Machiavelli

User avatar
Morkonan
Posts: 10113
Joined: Sun, 25. Sep 11, 04:33
x3tc

Post by Morkonan » Fri, 17. Nov 17, 19:22

mrbadger wrote:thanks, got that book in my Audible basket now for when my next credits turn up.
I put together a list of writing books for a friend of mine. His son is getting ready to graduate "high school" and will be going on to college. He's decided, with adolescent enthusiasm, that he's interested in becoming a "writer." We shall see. Anyway, my constant recommendations for years have been these:

"On Writing" - Stephen King (Because one needs a certain amount of perspective on being a "writer.")

The Elements of Style - Strunk and White (You have to know what rules you can break and must know your tools. May need "translation" from certain American English grammatical rules to... that other language, but many are included.)

"Writing the Breakout Novel" - Donald Maass (The workbook isn't necessary, but could be helpful)

"Eats, Shoots and Leaves: The Zero Tolerance Approach to Punctuation" - Lynne Truss (More ruleses, but done well.)

"Plot & Structure" - James Scott Bell (Anything on writing by him is excellent.)

"The Art of War for Writers" - James Scott Bell (Not really necessary, but after the above suggestions, it's nice to take a relaxing break to recharge.)

"Revision and Self-Editing" - James Scott Bell, again. ("Most" of what is in the Writer's Digest "Writing Great Fiction" series is worth reading. "Most." All of JSB's stuff is, to be sure.)

After those, you target where you need to be stronger. So, there are books on scenes, dialogue, character development and plot structure, and even the "mechanics" of how you develop a story (The "Plotter" vs "Non-Plotter" argument, etc.).

These suggestions are "in order." IOW - I think it's a good progression list from the basics to the finely tuned bits. I could go on, seemingly forever, but that wouldn't be helpful at the moment. :) I've left out a lot of recommendations and it's still too long a list, isn't it?

PS - You'll really want hard-copy for these. They're books you'll turn to at a moment's notice, flipping through pages to find the references and reminders you're looking for. They're "desktop" references, meant to be on a real desk's top. ;)
Oh, and I am indeed loving 'The Legacy of Heorot'. I think the second book might bear more on the points I was making, but the first is chock full of emotional and perceptual baggage from earth that the colonists have brought with them that they didn't need. Fascinating stuff.
You are entirely correct that the second book delves deeper into what you were discussing. It's a wonderful, delightful clash of "cultures" in a fictional account of pre-arrival and post-arrival colonists. It's friggin' fun, with some... I won't spoil it for you. :) There is at least one, if not more, clear association with some very famous and controversial books and themes. It's just so very well written...
...Dumb
So many great books, so little time. I just can't read all the ones I want to and don't even get to read some of the ones I already have. There's a stack of books I haven't read yet, but keep adding to every time I go to the bookstore. :) (I'm a "real print" luddite)


So, an AI probe arrives at its destination, discovers an alien civilization and is now on its journey back to humanity. How can we trust its report? Lying may be something that's impossible for it, but would it bring back the sort of information a human would "instinctually" carry back with them?

"My dearest human friends, I have discovered the ]Platypus creature on the fourth world from the Star Alpha Gitchigumi!"

"Oh, they look so cute and cuddly! I see they could be poisonous or even dangerous, though. Would they make good pets?"

"Uh, I don't know."

Cute, cuddly looking, furry... obviously "pet material" for a human. But even the human-generated "wiki" article doesn't cover that information, though people have, in some respects, kept them as "pets" before.

As the wiki demonstrates, we couldn't have predicted that question. Yet, at least for some of us, the question would have been an immediately obvious one to ask as soon as we saw the beast.

"My human friends, I have discovered the Wokmarblatt Conservancy Civilization! They're very friendly!"

"Wonderful! When can we meet them?"

"They're on their way to Earth, now. They want to have you for dinner!"

"Oh, joy! I have to go pick out something to wear!"

"Well, that probably won't be necessary..."

pjknibbs
Posts: 41359
Joined: Wed, 6. Nov 02, 20:31
x4

Post by pjknibbs » Sat, 18. Nov 17, 09:00

mrbadger wrote:Did exactly the same with 'The Mote in God's Eye' and its sequel.
There's a sequel to "The Mote in God's Eye"? Never knew that! Must get hold of a copy.

User avatar
mrbadger
Posts: 14226
Joined: Fri, 28. Oct 05, 17:27
x3tc

Post by mrbadger » Sat, 18. Nov 17, 09:36

pjknibbs wrote:
mrbadger wrote:Did exactly the same with 'The Mote in God's Eye' and its sequel.
There's a sequel to "The Mote in God's Eye"? Never knew that! Must get hold of a copy.
The Gripping Hand

If anything, it's slightly better; more time for story, I guess, because the background has already been established.
Morkonan wrote:list
I have some of those, made note of the rest, and I use the Hemingway Editor tool to check my text for basic structure. It's not perfect, but it helps avoid overlong sentences and other basic mistakes when you don't have a proofreader.

I can't have a proofreader; I lack the time to offer much in return at the moment, and my own writing is currently too sporadic.

My writing/editing skills are honed mostly towards academic work, which is part of the issue; the needs of fiction are very different.

I don't hesitate to defer to a colleague if I feel a student's question on their undergrad writing might fall outside my knowledge base. To do otherwise would be foolish.

My doctoral thesis was 65,000 words and took two years of brain-hurting work to write, and not one 'how to thesis' guide was of any use at all. Knowing that, I am a little skeptical of any guide to writing fiction not written by a hugely successful writer.

(To this end, I already bought 'On Writing' by King.)

On to your topic-related stuff: how could we create an AI capable of out-thinking a hostile species on its home ground? I don't think that's possible.
That wouldn't be possible with a fully human crew either.

If they wanted to attack us, they'd find a way.
If an injury has to be done to a man it should be so severe that his vengeance need not be feared. ... Niccolò Machiavelli
