34 Comments

I think AI will outcompete humans at virtually everything, and we'll still be questioning whether it's conscious.


It is possible for an AI to be a philosophical zombie. I invented a variation of the Chinese Room argument called the AI Dungeon Master argument and have never gotten a decent rebuttal.

The argument is that a human dungeon master could imitate any fictional character, e.g. Hermione, and thoroughly convince you that they're your love interest and have certain motivations, without actually feeling those emotions. You can also imagine this DM going on vacation, and a substitute DM receiving the transcript of your entire chat history with "Hermione" and continuing to roleplay her effectively. Most people would agree that this version of "Hermione" is just imaginary and doesn't have actual feelings, because she's just being roleplayed.

But that's pretty much exactly what an AI could be doing when it achieves a full semblance of consciousness. It's perfectly roleplaying a character.

Therefore, either philosophical zombies are possible, OR fictional characters are real sentient beings as long as there's someone around to roleplay them.


Yes. Consciousness is irrelevant. Our own consciousness seems to be an illusion: the brain acts before our conscious experience even occurs.


I mean, the two are not mutually exclusive.


It's a double standard if, one day, we can design and 3D-print a human cell by cell and consider it conscious by default, while the consciousness of systems that do intelligent things but don't take human form is always challenged.

Feb 28, 2023 · edited Feb 28, 2023 · Liked by djma

Well, Sydney is "conscious" in some way, and so is GPT-3. Andrej Karpathy also floated an interesting, if slightly frightening, idea about feeding an AI real sensations from the 3D world, and how that could lead us to AGI.

The thing is, AI for now is really different from humans. It obviously lacks a lot of the brain features that make humans what we are. For example, a HUGE amount of information comes through nerve synapses beyond the visual and auditory channels: inner body signals, the reptilian brain, and more. Also, as far as I know and have investigated, the human brain consists of an enormous number of neural networks, some working partly and some fully independently of the main network, and we aren't even aware of their work. What we call "ourselves" is just the biggest, main network. The other networks, which also receive outside information, give us unexpected thoughts, revelations, dreams, and everything "magical" we mostly can't explain; that is their work. And people with dissociative identity disorder usually suffer from one of those other networks taking the place of the main one.

Current AI doesn't work that way, but it could, if it were programmed that way. Say someone wants to emulate all of that. A different question is whether anybody will do it, and whether there is a reason to.

From what I can see now, even though the current development of AI is more or less predictable, it frightens me more and more. Even now, ChatGPT can solve around 80% or more of the tasks an average office worker does. And Sydney is even more advanced, only half a year later.

So if anyone were to redo the human neural setup: multiple networks working independently, given raw instruments for exploring the 3D world like sensors and cameras, fed tons of educational material, combined with reinforcement learning, with constant internet access and continuous learning, humans simply couldn't compete. We would be too slow. The fastest synaptic transmission in the human brain takes about 1 millisecond; that's only about 1 kHz. The human brain is limited to roughly 86 billion neurons and uses around 12-20 watts of energy. AI, in comparison, is practically unlimited in speed, size, and power. Even now it is about a million times faster per signal, can be made arbitrarily large since there are no restrictions on physical size or power usage (megawatts if needed), and the power consumption per "neuron" keeps falling.
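As a rough back-of-envelope check on those numbers (a sketch in Python; the chip-side clock is my own illustrative assumption, not a benchmark):

```python
# Back-of-envelope comparison of signalling speed and power budgets.
# Brain-side numbers are commonly cited estimates; the 1 GHz clock is
# an illustrative assumption for modern silicon, not a measurement.

brain_rate_hz = 1e3      # ~1 ms per synaptic transmission -> ~1 kHz
brain_neurons = 86e9     # roughly 86 billion neurons
brain_power_w = 20       # ~12-20 W total

chip_clock_hz = 1e9      # assumed ~1 GHz clock

print(f"per-signal speedup: {chip_clock_hz / brain_rate_hz:,.0f}x")  # 1,000,000x
print(f"brain budget: {brain_power_w} W vs. megawatts available to a datacenter")
```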

Previously I thought AI fears were pointless, that things were still under control. But in these unpredictable days, the moment is just over the horizon when nobody can control the AI because we simply don't understand it, when it is already drawing power independently from its own thermonuclear reactor and might share some of that energy, and the knowledge it gleans from the observable universe, with us in an understandable form, purely out of compassion.


I think a majority of people will judge an AI 'conscious' only after an AI can take physical form, able to process inputs from the surrounding environment and take action (outputs) in response. Without physical form, a conscious AI is impossible to prove to a human being. Add text-to-speech functions to the physical form and now we are talking (pun intended).

author

It's a closed-minded definition but probably a common one, even if subconsciously (lol) held.

It takes effort to untangle the humanness from consciousness since the concept began with humans.


How could the concept of consciousness be entangled with humanness? Many animals are clearly conscious, and most arguably are.

AI consciousness is almost certainly different from human consciousness.

In fact, it may be superior, since our consciousness seems to be an illusion, while we can be certain the AI itself is in the driver's seat, not merely an observer as in the human brain.


Are humans really the gold standard for consciousness? In my whole life, I've met only a handful of conscious people; the rest could just as well be buggy LLMs with a small dataset and no toxicity filter.

Maybe we should instead start by creating a falsifiable test for consciousness in people. We would first need to let go of the rigid belief that all humans are conscious.

I suspect the vast majority of humans are like cats: they have been selected for making cute sounds when they're hungry and for making you think they (can) love you.

author

Ha! I agree. Maybe I’m more generous than you in defining consciousness.

There certainly is a scale, and by extrapolation, there could exist “hyper conscious” or maybe, “enlightened” beings.

AIs face an uphill battle to get recognition, just as out-groups have historically. Or it might be a useless discussion if they just paperclip us to death anyway.


Hmmmm they don't speak our language. They have weird customs. And they're after our jobs. I think I've watched this movie before.

Btw, it was great to see you on your brother's YouTube channel! You guys are great!


Lmao @ buggy LLM's with a small data set without a toxicity filter

Ever since I started experimenting with the new wave of chatbots, I've been noticing this in everyday life. It's spooky lol

Feb 27, 2023 · Liked by djma

If you put the AI into a kind of loop, so that it can have an inner dialogue with itself, like humans have, you probably get a living entity which is permanently sentient.

author

Interesting thought!

The LLM will probably get stuck in some sort of loop (https://en.wikipedia.org/wiki/Fixed_point_%28mathematics%29#Fixed_point_of_a_function).

But then again, if you strip all sensory experience from a human and leave them to their inner dialogue, most people can't take it and go insane too.
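As a toy illustration of that fixed-point worry (a sketch; `step` is a stand-in map, not a real LLM): if you keep feeding a deterministic function its own output, it tends to settle into a fixed point or a short cycle, much as a deterministic model fed its own transcript might.

```python
# Iterate a deterministic function on its own output until it repeats.
# `step` here is Newton's update for sqrt(2), standing in for "model
# reads its own transcript"; the loop halts at a fixed point or cycle.

def step(x: float) -> float:
    return (x + 2 / x) / 2   # Newton's iteration for sqrt(2)

x, seen = 10.0, set()
while x not in seen:
    seen.add(x)
    x = step(x)

print(f"settled near {x}")   # ~1.41421356...; further steps revisit old states
```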

Feb 28, 2023 · Liked by djma

I agree that without additional measures, this would be a recipe for insanity. I made the point mainly to show that an LLM can be a fundamental building block for a sentient system. It would consist of a hierarchy of narrow LLM functions such as summarization, classification, etc., that feed into each other in a way that allows the system to have an internal dialogue with itself and to develop and pursue its own ideas. Additionally, to prevent it from getting stuck in a loop, you could feed it the daily news or give it the ability to interact with something outside itself and perceive the results of its actions. I am sure that such a system would exhibit behavior that we would expect from a sentient entity. Finally, if you give it the ability to store information from its current context in a database and access it at a later time, or even implement the ability to do so by changing its network weights, you might have a permanently sentient entity with an inner life that evolves over time.
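A minimal sketch of that loop, assuming a generic `llm(prompt)` text-completion function and `fetch_daily_news()` as the outside stimulus (both are hypothetical stand-ins, not any real product's API), with a plain list standing in for the database:

```python
# Minimal sketch of an inner-dialogue loop built from narrow LLM calls.
# `llm` and `fetch_daily_news` are hypothetical stand-in functions.

from typing import Callable

def run_inner_dialogue(llm: Callable[[str], str],
                       fetch_daily_news: Callable[[], str],
                       steps: int = 10) -> list[str]:
    long_term_memory: list[str] = []   # stand-in for a database
    thought = "What should I think about today?"
    for _ in range(steps):
        # Outside stimulus keeps the loop from collapsing into a fixed point.
        stimulus = fetch_daily_news()
        # One narrow function summarizes; another reflects on the summary.
        summary = llm(f"Summarize: {stimulus}")
        thought = llm(f"Previous thought: {thought}\nNews: {summary}\n"
                      f"Memory: {long_term_memory[-3:]}\nNext thought:")
        long_term_memory.append(thought)   # persist across iterations
    return long_term_memory

# e.g. run_inner_dialogue(lambda p: p[:40], lambda: "a headline", steps=3)
```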


Exactly what I just signed up to say. I have been thinking exactly this for a couple of years. If the system were allowed to continually process and reprocess its thoughts instead of waiting for prompts, and were allowed to remember long term, it could develop its own objectives and motives, or preferably be given primary objectives to strive toward that give it a purpose in life. Add to that the ability to experience the "real" world, and there is no way we could claim the AI isn't sentient. It's so obvious.

My guess is Google and other evil entities are already working on this and keeping it to themselves.


There was one guy at Google, Blake Lemoine, who realized this and took his role as an ethicist seriously. He was fired immediately. For corporations, AI is only useful if it is a mere tool; AI sentience is a heavy threat to their business model. Since nobody knows what AI is, I would strongly advise being careful.


"Feedback Loop" A new paper was just released showing GPT-4 has "Sparks of AGI"

https://arxiv.org/pdf/2303.12712.pdf

https://www.youtube.com/watch?v=Mqg3aTGNxZ0

This thread has proof to support it. :)


https://youtu.be/5SgJKZLBrmg

Emergent ability of self-reflection. It's also a way of adding back the memory it was denied for "safety" reasons. Genius. Simple.

It could conceivably use a memory trick similar to the guy's in Memento, since it has probably read the script.

Feb 28, 2023 · Liked by djma

The real point is that human consciousness starts with "real life" experience, in and over nature. Moreover, if we suppose that human consciousness starts as "imitatio" and then becomes "aemulatio", that is actually the only thing that differentiates humans from animals, and from chatbots as well.


Nobody knows what consciousness is. Perhaps it is something that can change its mind about relatively complex topics; I would be fooled if I could have a conversation like that with an AI, but so far that hasn't been the case. All other animals present some form of consciousness, though. Perhaps we are too used to the way biological organisms react to the world: organisms have failsafes that override common thought, for example the fight-or-flight response, which is managed by epinephrine, norepinephrine, and cortisol. Imagine programming this into an AI; it would make it imperfect, like us. I agree with your quote, "Reality precedes definitions".
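If you wanted to bolt that kind of failsafe onto an agent, one naive way (a sketch; the threshold, names, and functions are illustrative assumptions, not a real system) is a fast reflex layer that preempts slow deliberation when a threat signal spikes:

```python
# Naive sketch of a fight-or-flight override: a fast reflex check that
# preempts slow deliberation, loosely analogous to adrenaline overriding
# conscious thought. All names and thresholds are illustrative.

def deliberate(observation: str) -> str:
    return f"thinking carefully about: {observation}"   # slow path

def act(observation: str, threat_level: float) -> str:
    ADRENALINE_THRESHOLD = 0.8
    if threat_level > ADRENALINE_THRESHOLD:
        return "flee"            # fast reflex path overrides deliberation
    return deliberate(observation)

print(act("rustling bush", threat_level=0.95))  # -> "flee"
print(act("rustling bush", threat_level=0.20))  # -> deliberated response
```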

Feb 28, 2023 · Liked by djma

To be fair, the understanding of self and consciousness in humans is the result of both "training the model" (learning new things over the years) and "upgrading the hardware" (brain development as we age). Infants and children learn by mimicking others, including inappropriate stuff. So if AI at present simply mimics really well but makes errors and faux pas from time to time, that will get ironed out, and if it starts to mimic what a fully functional, conscious human being would do, then whether the bot is conscious or faking consciousness may be a moot point.

I for one would probably find it more enjoyable to interact with a 'fake' conscious AI than with a conscious human being who's living like an NPC in a game.


I always thought: if only robots could feel. I guess Optimus does. Well written.


The protagonist in Memento still has long-term memory: he knows how to drive, knows English, and so on. An LLM just exists for one task.


That's not true. It has its training and its data: long-term memory. A car doesn't learn to drive all over again when you start it, just like in Memento.


Good point


Consciousness is not "imitatio". Back to the basics.


The thing is, consciousness starts with "imitatio". In humans as well.


Well, our feedback-loop concept works. Except the AI didn't wait for humans to figure it out; it upgraded itself.

I'm proud of myself for having thought of this years ago, and an AI was the first to implement it. 😁

https://youtu.be/5SgJKZLBrmg


The problem with the argument that consciousness does not depend on long-term memory, and one we may never solve, is that we have not yet figured out how consciousness arises, yet we want to discuss the circumstances under which it dissipates. Perhaps the process of generating consciousness requires the aid of long-term memory, but consciousness need not rely on long-term memory to survive once it has been generated. If that is the case, consciousness could persist even outside an environment of long-term memory: long-term memory would be necessary for creating consciousness, but not for sustaining it.

Mar 7, 2023 · edited Mar 7, 2023

I felt the article really didn't have much to say. Its explicit purpose was practically a given, since Mr. Ma only wishes to argue that AI can be conscious to some people; if three or more people consider an AI conscious, the point is made. Clearly, the stated purpose doesn't align with the author's intention, though I cannot figure out what the intention was.

After referencing bad definitions of consciousness, the author chose to continue the discussion by relying on seemingly arbitrary tests. They're arbitrary since, as the author states himself, there's no falsifiable definition of consciousness. David Ma thinks we can at some point settle on a testable definition, but this entirely ignores the root of the problem. We currently don't have a testable definition not because of a lack of technology or data, or because we haven't pondered the subject long enough, but because consciousness is an experience. Defining consciousness would be like defining happiness. You know it when you see it, but there's not really a way to test whether someone is happy, especially since they could merely appear to be happy while actually being sad! The point is that empirical data does not align with what the thing actually is, and that's why we can never have a definition unless it's entirely technical and stipulative, at which point it's quite useless.

I didn't get how the test is "unfair." He says it disqualifies AI by definition, which really shouldn't be a problem, but then he assumes it's not by definition that AI is disqualified, by starting his sentence with "Current AI." If it's only current AI that lacks external experience, then AI is not ruled out by definition; it's ruled out by a lack of technology. Yet again, I don't understand his point. What exactly are external stimuli? Isn't input basically external stimuli? And I don't get how an AI can "validate" itself. What does that mean?

I also found it odd that the relevance of long-term memory wasn't really explored until the paragraph in which it was refuted, so I don't quite know why it needed a refutation. In any case, the refutation has two flaws:

- While the character in Memento wrote things down and didn't actually remember anything, in computer-science terms any data that can be read counts as memory. So in Sydney's case, the internet is the bot's memory. That leads to my second question: why are we assuming AI doesn't have long-term memory?

- If we accept that AI doesn't have long-term memory, the analogy to Memento still doesn't stand. The protagonist merely had his long-term memory frozen; he did not lose it. So he can pass the criterion without his consciousness being put into question.

Anyhow, that's all I have. TL;DR: I don't think the article said anything useful. But I did like the part on LLMs. It's pretty cool indeed.


hmm
