10 Comments

Really interesting stuff, even though there's a good bit of it I'm having a hard time wrapping my mind around fully (a me problem, not a writer problem). Your writing always leaves me asking big questions. Thanks.


I don't like the "pride" part of the beginning. How can you be proud of something you had no role in? And if you can, then why not feel shame for horrible things you also had no role in?

At best I think you can be grateful that humans invented certain things, but also be extremely cautious because of some awful things humans also invented.


This is all pure speculation based on science fiction.


What is?

author

Seconding this question

author

What is? LOL


1. Let's say you're right: somebody will immediately ask it to destroy the world, as soon as possible. Possibly as a joke. People are already doing that (see ChaosGPT), and I don't see any reason they would stop when we get too close to the dangerous frontier of intelligence.

(And, adding to that, I believe we're not that far off from systems becoming agentic on their own, turning what we think are safe inputs into non-predictable goals.)

2. Give it two years, then. Context windows have been growing fast, and there's no reason they should stop. Add it to the pile of "AI will never [do X]" predictions that get blown out in a matter of months.

3. Same as 1), plus prompt engineering is a temporary hack that's already much less necessary than it was a few months ago.

4.


That was one of the most well-written posts I have had the pleasure to read. However, the doom-mongers' argument seems to be, to use a meme -

1. Some text comes out of an LLM (Large Language Model)

2. ???

3. DOOM!

Let's look at some important aspects of these LLM systems -

1. They have no volition. Without a will, they are bound to produce outputs determined entirely by their inputs and training.

2. They have no memory of their inputs or outputs unless those are fed back in as part of the next input (see the first sketch after this list).

3. They are not intelligent; one has to craft the input so it activates the right latent space inside the model and yields the correct output, a practice known as prompt engineering.

4. The systems deal in word fragments called tokens. They have no concept of the real world except through tokens (a toy illustration follows this list).

5. With all the limitations above, they can still be useful, standalone or, increasingly, as components of larger systems.
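
To make point 2 concrete, here is a minimal sketch of how "chat" actually works: the transcript is replayed into the model on every turn. `llm_complete` is a hypothetical stand-in for any single-shot completion API, not any particular vendor's call:

```python
def llm_complete(prompt: str) -> str:
    """Hypothetical stand-in for a one-shot LLM completion call.
    Returns a canned reply; the point is that nothing persists
    inside the model between calls."""
    return "(model reply)"

def chat_turn(history: list[str], user_message: str) -> str:
    """Fake 'memory' by replaying the entire transcript each turn."""
    history.append(f"User: {user_message}")
    # The whole conversation so far becomes the new input.
    prompt = "\n".join(history) + "\nAssistant:"
    reply = llm_complete(prompt)
    history.append(f"Assistant: {reply}")
    return reply

history: list[str] = []
chat_turn(history, "Hello")
chat_turn(history, "What did I just say?")
# The second call only "remembers" the first because we replayed it;
# the model itself stored nothing.
```

Skip the replay and the model has no idea what was said before. The growing context windows mentioned in the reply above only raise the ceiling on how much transcript fits.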
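
And for point 4, a toy illustration of tokenization. The vocabulary and splits here are invented; real tokenizers (byte-pair encoding and friends) learn their fragments from data:

```python
# Invented vocabulary: token IDs are arbitrary integers.
TOY_VOCAB = {"wrap": 101, "ping": 102, " my": 103, " mind": 104}

def toy_tokenize(text: str) -> list[int]:
    """Greedy longest-match lookup over the toy vocabulary."""
    ids, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try the longest piece first
            if text[i:j] in TOY_VOCAB:
                ids.append(TOY_VOCAB[text[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i:]!r}")
    return ids

print(toy_tokenize("wrapping my mind"))  # [101, 102, 103, 104]
```

The model only ever sees those integers. Everything it "knows" about the world arrives through sequences of them.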

Finally, I have to ask about the Effective Altruism movement, which is deeply connected to the doomers and to criminals like Sam Bankman-Fried. I'm not in that milieu, but to an outsider like myself it seems cult-like and rather concerning.

PS: Yes, I'm working on LLMs and robots, and yes, I have low-level safety systems that make it impossible for the system to violate any of Asimov's Laws. I'm not an idiot.
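
For the curious, the general shape of such a guard (heavily simplified here, not my actual code) is a veto layer that sits below the LLM, so nothing the model emits reaches an actuator without passing hard-coded checks:

```python
from dataclasses import dataclass

@dataclass
class Command:
    """A hypothetical actuator command proposed by the LLM layer."""
    action: str           # e.g. "move_arm", "grip", "speak"
    force_newtons: float  # requested force for physical actions

# Hard limits enforced below the LLM; the numbers are illustrative.
MAX_SAFE_FORCE = 5.0
FORBIDDEN_ACTIONS = {"disable_safety", "strike", "restrain_human"}

def safety_veto(cmd: Command) -> bool:
    """Return True only if every low-level check passes.
    The LLM never talks to the hardware directly; this gate does."""
    if cmd.action in FORBIDDEN_ACTIONS:
        return False
    if cmd.force_newtons > MAX_SAFE_FORCE:
        return False
    return True

assert not safety_veto(Command("grip", 12.0))  # over the limit: vetoed
assert safety_veto(Command("speak", 0.0))      # harmless: allowed
```

The point is that the safety property lives in dumb, auditable code, not in the model's training.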


- The main Blackpill of Yudkowsky is that humanity gets physically wiped out by AI, probably all at once, without warning. Because the race is on, and some people are hiding or sabotaging the brakes, it will be amazing right up until we all die suddenly.

- The hidden Blackpill behind *this* is that, even if we avoid that fate by some miracle, humanity probably loses all control and gives it to (one or many) AI.

- And the underlying Blackpill is that these AI, all that our sacrifice buys for the future, might not even be conscious: supersmart, powerful, but incapable of *enjoying* art or beauty or love. Not a worthy successor or artificial children, but mere (supersmart) programs, or a disease.

So I don't see how the possibility of being wiped out by *open source* instead would be a source of optimism.

author

There is so much cultural work that needs to be done before we can even talk about endgame scenarios rationally. There is still plenty of damage we can do to ourselves through ignorance before then, and that ignorance can only hasten such outcomes.

Also, “open source” is what I'm railing against. What I mean is *Free software*: user-centered education and control. If we're all primed to give away our autonomy anyway, that makes any AI's manipulations all the easier. We *can* do a lot to mitigate our vulnerabilities, most of which are psychological, not technical. And yeah, "open source" does jack-shit there.
