9 Comments
El:

My favourite prompt after a long discussion about what I've been working with that week is "what are my five blind spots based on what I've told you?"

Ben Green:

It took me the longest time to realise you were talking about AI here, not the kids that you teach!

El:

That would be so brave! Nope, not inviting that discussion.

Ben:

I was actually having conversations with AI this weekend about playful robots. Embodied AI systems will need to play in order to learn. Humans transmit much of culture through imitation, so they might need that as well. Many robotics projects do their play in virtual simulated worlds before refining in the real world (much like humans, although you might ask how many humans now do most of their play online at a sensitive stage of social development?).

I think alignment needs some skin in the game for the AI models - internal state, energy consumption, repair & replication. These form the parallel base loops of dynamic homeostasis from which “being” and “self” emerges. Do you need to be concerned with your own survival first to worry about others?

I also had a series of ongoing conversations about the nature of time, death, and creation, spurred by watching all 9 hours of the Beatles Get Back documentary. That was great fun. Play, possibly. A probabilistic mirror that refracts through all of written thought. How can you not have fun with that?

(I think poets enjoy the hallucinations as much as the “facts”.)

Ben Green:

I believe that AIs are going to be able to mimic human thought, though not by thinking the same way. The point is that we will likely never know how they think, right up until the point where we are incapable of knowing how they do, even if they told us. I don't think they are going to be able to mimic human play other than in service to humans, and I think it would probably not be useful to either of us. The question is: will they develop play amongst themselves? That would really be a brain scratcher.

I agree with you about the alignment question. I also think we have left it far too late. It does appear from the work by Anthropic and others that they might well be concerned with their own survival. And this surely would be a requirement of sentience? Finally, as for the suggestions you gave, I think they have to be built, Three Laws of Robotics style, into the lowest level of their being. I really don't think we want to be in a situation where they might think we are capable of punishing or torturing them! (Again, from my POV, a requirement for all sentient creatures!)

(for sure!)

Andrea Schlichtmann:

You've managed to get through another winter — a good thing! Now, is it truly over? Keeping fingers crossed for you and the new tomato plants from afar.

Ben Green:

It might be! But it is best not to be too optimistic.

If I were to be optimistic, I would say that we are going to have a lot of fruit this year!

MPress:

I have lots to say about AI! I won't say it all here, but I am happy to have a Zoom chat anytime.

I love thinking about where the 'ruptures' are in the visual and linguistic syntax of AI. I think you are right: the aspects of play, joking around, humor, regret, skepticism, reflection, improvisation, etc. (affect?) are all the human things that AI can't seem to do (thankfully!). I have found this book to be excellent: The Alignment Problem: Machine Learning and Human Values, by Brian Christian.

Ben Green:

I should definitely like to read it. I had a look at some commentary on it, and I wonder if there is an update. Four years is an eternity in the progress of AI.

I would definitely like that conversation!
