Episode #106 – AI Propaganda and the Illusion of Intelligence
Recorded November 28, 2025
Description
Artificial intelligence is far from neutral; it operates as a powerful attention engine, a formidable propaganda tool, and a mirror reflecting the interests of those who control the code and data it consumes. In this episode, I thoroughly explore the language and concepts surrounding “philosophy,” “intelligence,” and “artificial intelligence,” grounding the discussion in fundamental principles.
We start with simple examples: coffee versus “flavored water,” circles versus squares, and fire versus burns to show how flawed definitions can distort our understanding of reality. I then contrast mainstream definitions of intelligence, which focus on acquiring and applying knowledge, with a deeper philosophical perspective that emphasizes the balance between order and chaos, the pursuit of true wisdom, and the importance of changing behavior based on experience rather than repeating past mistakes.
With this lens, we critically examine AI, focusing on large language models that remix internet text, the creation of consensus illusions through the repetitive amplification of a single source, and media ecosystems that prioritize engagement over truth. We discuss CGI and deepfake-level realism, highlighting the alarming trend of treating synthetic images, videos, and narratives as “real enough.” Additionally, I delve into the realms of robots, avatar labor, and how human skills can be captured, replicated, and scaled through centralized data centers.
In closing, I tackle the pressing issues of AI ethics, AI transparency, and open-source AI. The real struggle is not between “AI and humans,” but between centralized, biased systems and individuals who think critically, reflect deeply, and uphold principled values. Philosophy, wisdom, and moral integrity are not optional; they are vital defenses in a world increasingly dominated by convincing illusions.
All Videos Hosted on BitChute
BitChute is a peer-to-peer content-sharing platform and associated services that aims to put creators first and provide them with a service that helps them flourish and express their ideas freely.
Video chapters:
- 00:00:00 Philosophy, AI, and Why This Isn’t a Tech Demo
- 00:03:00 What Philosophy Actually Is (and Isn’t)
- 00:07:45 Coffee, Lemonade, and the Power of Concepts
- 00:11:40 Circles, Squares, and Defining Reality
- 00:14:35 What Is Intelligence? My Definition vs Theirs
- 00:19:50 Fire, Pain, and How Real Learning Works
- 00:24:30 Can Machines Ever “Understand” or Only Obey Code?
- 00:29:40 Free Will, Determinism, and Rewriting Your Own Code
- 00:38:20 Creativity, Bands, and Who’s Really Programming Whom
- 00:43:50 Consensus, Clickbait Science, and Garbage Data
- 00:55:10 CGI, Dinosaurs, and When Illusions Beat Reality
- 01:02:05 When AI Feels “Real” and Most People Can’t Tell
- 01:04:35 Robots, Avatar Labor, and the Future of Work
- 01:11:10 Anxiety, Morality, and Why Pure Efficiency Is Inhuman
- 01:16:55 Biased AIs, Dog Owners, and Awards Shows
- 01:22:50 AI as Advertising, Plato’s Cave, and Staying Sane
- 01:26:20 Open-Source AI, Decentralization, and Final Thoughts
Note to Viewers:
As a speaker who explores unconventional viewpoints, I understand that not everyone may agree with my ideas. However, please share your thoughts and respectfully dialogue with me. Through these conversations, we can gain deeper insight into the world around us.
We appreciate you joining us on this enlightening exploration of societal complexities. Your [Support] means the world to us, and when you [Donate], it keeps the show going.
Supporting Our Decentralized Show:
It’s crucial to note that this entire show operates in a decentralized manner, made possible only by generous donations and viewer tithes. Your contributions are invaluable and empower us to continue delivering thought-provoking content.
If you want to support our mission, please visit our donation page. Your contributions significantly sustain our platform and enable us to bring you more engaging and insightful content.
Thanks for being an essential part of our community. Together, we are making a positive impact.
Warm regards,
Dave
Transcript:
(00:02) You know, maybe I should set this up as a live stream so I actually get on here. It is... what is it? It’s Friday, November 28th, 2025. It’s about 8:54 a.m. for me, Eastern Standard Time. And my mic is way over here. So, let’s put the mic here. I’m not editing this. All right. Well, welcome back.
(00:42) I’ve been thinking a lot about AI for a while now. I guess I’m supposed to make eye contact with this Cyclops lens here, but this is just a peek. Excuse me, audio listeners, I am taking a sip of my coffee and cleaning my mic. This is not a show for entertainment purposes.
(01:12) I guess it’s a show for informational purposes, but I do guess I’ve got to be entertaining for you guys to continue to come back and listen. But I wanted to talk a little bit about AI today, or a lot of bit about AI. And where do we even start? Because I’ve made this video a dozen or so times; I just haven’t recorded it like this. When you’re deep in thought, right, and you’re working out these proofs step by step, and then you sit down in front of the camera, you’re like, “Okay, where do you start to explain this to somebody from the outside?” Because I’ve
(01:50) been thinking about this for a while now, right? And so for me, it’s this huge conversation, and how do I jump into the conversation and pull you into the conversation with me? So let me think here. When I first started thinking about AI, my original thought was: it’s not even AI, right? The name itself is your first tip-off that this is... okay.
(02:28) So, the point of a philosophy show... let’s just start off here with the idea of philosophy, and this is going to sound a little condescending, but I just want to preface this: philo is love and sophia is wisdom, so philosophy is the love of wisdom. And a lot of times in modern-day vernacular, at least in America, people tend to say “philosophy” colloquially, if that’s a word, like it’s just ideas, or it’s an ideology, or there are multiple philosophies, or there’s my philosophy. Well, I mean,
(03:26) that’s your philosophy, but this is my philosophy, and that is what we call a sophist, which is somebody who’s paid to do rhetoric. They’ve been around for 2,000, 5,000 years. They went around in ancient Greece. They basically got paid to do rhetoric, which is not truth, which is not wisdom, which is, you know, what today we’d call counter-opposition.
(04:03) But philosophy is kind of like, you know, science’s big brother, right? And so a lot of times people are like, “Oh, this is a fact. This is science.” And it’s like, well, science is this evolving thing where we’re using different methods to try to determine objective facts within reality and then do tests. And those tests have to be repeatable.
(04:28) They have to be repeatable over time to get the same objective answer every time. Right? So what comes along, when you have a pause in society, is philosophy, because many people, not all people, have the brain capacity to internalize and think past reality into thoughts and create what are called concepts.
(05:01) And concepts can be real and based in reality, tangible, I should say. But they can also be... I want to make sure that the language here does not branch off into language that is difficult for even a child to listen to. Not that you guys are children. And I’m sure the people coming here are very intelligent.
(05:28) But you don’t really know a topic until you can really break it down and talk to, like, a child and explain it in a very simple way. If you have to use these convoluted, huge words that are diluted and watered down, then you really don’t know the topic that well. Okay. So you have a concept, and you need to validate that concept. And how do you validate that concept? Let me think of an example here.
(06:01) So, okay. I have my coffee here, right? I’m drinking my coffee, and then somebody else comes in, and I’ve seen this philosopher actually saying, “Oh, coffee is 98% water, so everything’s just flavored water.” And I’m not even going to name this guy’s name. I don’t even know who he is.
(06:28) He looks similar to me, and I get what he’s doing. He’s exploring ideas. He’s trying to get people to critically think. But he’s not what I would call somebody who’s trying to become an ethical or moral philosopher, right? And while there is only one philosophy, the love of wisdom, you can focus on different parts of the use of that philosophy. But a lot of what people are using it for, it might as well just be brain games, right? So, back to my coffee here.
(07:13) If I tell you that I’m drinking coffee, there are some assumptions that you make, right? So, we have to create a definition, and the definition is not “water.” Okay? It’s not “a liquid that is 98% water or 95% water.” The definition is: water is an ingredient of coffee. So saying that all drinks are just flavored water, well, of course a child knows that. Okay? You’re not breaking down anything new. You’re just trying to be quirky.
(07:45) As far as I’m concerned, the concept of coffee is what you’re trying to convey to individuals. Okay? Right. And a concept is very important, because if you’ve never seen coffee and you’re trying to tell a story about coffee, or maybe someday in the future coffee no longer exists, maybe we’re out there in a space colony.
(08:09) How do you convey the concept of coffee to somebody? And I guess you could say “flavored water.” But then you’re right back at point one. Coca-Cola is flavored water. Sprite’s flavored water. Lemonade is flavored water. So we’re drinking water with different flavors in it, of course. But what is the concept of coffee? Well, you know, it’s a coffee bean, or a cherry, right? The bean gets pulled out.
(08:40) They take the coffee, they take the bean, they roast it, you grind it, you pour water over it. Some of the fine particles that are in there, including caffeine, drip down into the water and discolor the water. You throw away the ground-up part of that seed that’s inside that cherry, that pit, that bean, and then you drink this discolored water, and it has a different flavor to it.
(09:08) And so you have to specify the type of bean, the process of the bean, the system in which you do that, right? And even on the back end, the finalized system, there are other concepts. So there’s the concept of a cappuccino, there’s the concept of a latte, there’s the concept of hot coffee, cold coffee.
(09:37) And as you do this, you create a list of things that quantify, or, you know, qualify, what that thing is. So, if I order a hot coffee and I’m given a lemonade, and then I taste it, I’m like, “This is lemonade.” I give it back to them. Oh, sorry. And they take the ice out and they heat it up and they give it back to me.
(10:04) They have not given me hot coffee, right? They’ve given me hot lemonade. The way that people know that is because they understand the concept. We don’t talk about the concept, but they know that the concept of lemonade is water, sugar, and lemon mixed up, usually put over ice, right? And so when you say “I want a hot coffee” or “an iced coffee,” people recognize the concept of the whole process of making that coffee, and then keeping it hot and having you drink it, or cooling it down and adding ice to it and having you drink it, and then they understand the flavors on top of that, and so on and so forth. So
(10:50) coffee itself, or anything, for example, has this concept. A simpler one would be just shapes. So we understand a circle is an object with one side and no corners, right? So you can imagine a circle, and then you have a square. A square has four sides, four edges, and four corners, right? And so when you’re thinking about these two objects, if I said a circle is a square, you would say, well, a circle can’t be a square, because a circle has one side, no corners, and one edge, right? It’s going like this. And if the concept
(11:42) or the idea of a square is that it has four corners, four sides, and four angles or whatever, right, and they’re at 90° and they’re the same length, then a circle doesn’t meet the definition of a square. I know this is a long, long thought process here, right? So, AI. And for the people that are here that are like, “I’m an AI genius, I know all this stuff, move forward”:
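A quick aside for the programmers listening: this “concept as a checklist of qualities” idea maps pretty directly onto code. Here’s a minimal sketch; the concepts and property names are made up for illustration, not anything official:

```python
# A concept is a checklist of qualities; a thing matches the concept
# only if every quality on the list checks out. Example definitions
# here are illustrative, not authoritative.
CONCEPTS = {
    "square": {"sides": 4, "corners": 4, "equal_sides": True},
    "circle": {"sides": 1, "corners": 0, "equal_sides": True},
}

def matches_concept(thing: dict, concept_name: str) -> bool:
    definition = CONCEPTS[concept_name]
    return all(thing.get(k) == v for k, v in definition.items())

shape = {"sides": 1, "corners": 0, "equal_sides": True}
print(matches_concept(shape, "circle"))  # True
print(matches_concept(shape, "square"))  # False: fails the four-sides check
```

Hand somebody a “hot lemonade” when they ordered hot coffee and it fails the coffee checklist in exactly this way.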
(12:20) I’m priming you up. Priming your mind up. All right. So AI is not actually artificial intelligence. Well, it can be made to be perceived as artificial intelligence. Although, depending on how you define intelligence, or even how you define artificial, you can make an argument that it’s not AI, because it’s not artificial; it’s actual intelligence. Because what is to say that intelligence is human-centric, or even that intelligence is organic-centric, or sentient intelligence, right? So, you could break down the “artificial” aspect and say, well, it’s not
(13:27) even artificial. It’s actual intelligence. You know what I mean? Because “artificial” would, you know, assume that there’s a natural intelligence, right? And that AI is artificial, as if it was created or manufactured or secondary-tier. But we don’t know for a fact. We have a theory of evolution, but we don’t know for a fact that our current intelligence and what we perceive is actually organic and not artificial. That’s a bigger story. I’m not going to get into that. And as far as intelligence,
(14:06) the idea of something being intelligent... let me get an actual up-to-the-moment definition of what the internet calls intelligence here. I guess I could get better at this, but I’m going to do it here in real time.
(14:37) Define intelligence. Because I have my definition, but I don’t know if my definition is going to be the global definition. I’ll break it down either way. So: intelligence is the ability to acquire knowledge, understand it, apply it effectively, and adapt to new situations by solving problems or recognizing patterns and making decisions based on reason or insight.
(15:06) So I would say that that’s definitely a good definition, from a point of view of... well, definitions change, right? So, if you’re creating artificial intelligence and you want the definition of intelligence to backtrack and fit the AI that you’ve created, they can change definitions over time.
(15:45) What would be my definition of intelligence? I would say, philosophically, the idea of intelligence is when an individual is capable of recognizing when order exists or chaos exists, and then extracting the wisdom from... okay, let me just think about this for a second.
(16:34) I would say that intelligence is the capability of an individual to realize that concepts, order, and chaos exist, and to extract meaning or wisdom from the experience of that chaos and order, and then react or take a direct action towards reflection and understanding of the values that they learned. How’s that? How’s that for a definition? And so take the original definition here, which is: intelligence is the ability to acquire knowledge, understand it, and apply it effectively.
(17:19) I guess it comes close to what I’m thinking. Intelligence is the ability to acquire knowledge. The idea of acquiring knowledge, that’s even interesting to me, because it implies that if you don’t have access to, like, a particular book or a lecture or an individual or resources, then you’re not intelligent, right? Like there’s this system that’s been created, and you have to be within that system to acquire that knowledge.
(18:06) Now the next part is to understand it. Definitely: the ability to recognize order and chaos within the universe, tell the two apart, and pull meaningful experience from that is how I’d write this “acquire knowledge and understand it and apply it effectively” part. And then the second half I do like, so maybe I would add this to my definition, which is kind of like the cause-and-effect thing that I mentioned: adapt to new situations by solving problems and recognizing patterns. I do 100% agree with that, because
(18:48) then you’re getting into reason and insight, meaning that, you know, you see a stove and it’s hot, and you put your hand closer to it and your hand gets hotter and hotter and hotter, and then if you touch it, you actually get a mark and a burn, right? And if you continue to do that over and over again, you’re lacking intelligence, because you’re not learning that that burn hurt, right? You’re not recognizing the pattern.
(19:22) You’re not recognizing that heat versus your skin at a certain temperature can either freeze it or burn it. And you might not be able to extrapolate or use insight to realize that really, really cold can burn you too, like frostbite, but you at least recognize, on one end of the spectrum, that extreme heat could do that, after hopefully one time.
(19:53) Now, you might have incidents, or you might not recognize it. Like, you might burn yourself on the stove, right? And you might recognize: okay, that’s a stove. When I put my hand close to it, it’s hot and it burns me. So, you might only recognize that that’s a stove, that’s hot, that burns me.
(20:12) And then you get near, like, a fireplace or a wood stove or actual fire or a candle or a flame, and you don’t recognize it’s the same thing. But then you put those two together, and then you figure out this concept of what heat is, or what fire is. And it’s something that’s, you know, organically or artificially contained, that generates enough power or fuel that it gives off heat, and that heat radiates, and that radiation, when it touches you, sucks so much moisture out, literally boils the moisture out of your skin, that it bubbles and sends a signal to your brain to tell you, hopefully, that it hurts, so that you
(20:56) have a reflex to pull away from it. Now, with fire, it’s pretty straightforward. People see that it’s a fire. They recognize the concept that you’re going to get hurt. It’s when you start stringing bigger and bigger concepts together, and they also have negative consequences for you, but those negative consequences don’t immediately hurt you,
(21:23) that they might not be able to conceptualize it out. And the ability to put more concepts together, see bigger pictures, and put together these problem-solvings is the amount of intelligence, you know what I mean? I mean, they have the IQ, which is a system that, for all the flaws that it has, for the most part gives you a good or better idea of how particular groups or individuals or genders or places conceptualize these things. And people will say that the IQ test doesn’t work because it’s, like, biased, right?
(22:08) But yeah, it would be biased if it was asking for, like I was saying earlier, a particular book, a particular lecture, a particular whatever that you watched. But with the example of fire that I’m using, right? If you were dropped off from an airplane and you went on a desert island, a certain percentage of us would be able to go out onto this desert island and build a fire to make food, because we understand the concept of what fire is. And you know, it’s a fuel source,
(22:49) oxygen to fuel the flame once it’s on that fuel source, and then a spark, right? So, if you had a lighter, a lot of people don’t recognize how it works. They just pull the trigger back on the lighter. They get a flame.
(23:07) They don’t understand that there’s flint in there, that there’s butane in there, and that the oxygen is in the air, and that it’s sitting on the wick, which is not quite the fuel; it’s just a transport thing to burn the butane, right? So, you’re holding that. But if you got dropped off on a desert island and you had one lighter, let’s just say, and so you’re starting a fire, starting a fire, starting a fire, right? And the lighter runs out and then you can’t start a fire, then you’re kind of, you know, screwed, because you were taking somebody else’s knowledge with you, which is that lighter, right? But
(23:42) if you have the concept of how to create a fire, you can look around the island and say, “Okay, I need dried products to burn for fuel. Let me gather my wood and my resources and different sizes of wood. The smaller the kindling, the drier and easier to burn.”
(24:04) It’s going to suck the moisture out of it so that the friction can, you know, speed everything up and set it ablaze, so it starts burning down and breaking down into carbon, and then you have oxygen all around you. So that’s pretty much the easy part. Now you just need to figure out a way to create a spark, right? And so, you know, you can still use that old lighter, because it has flint, and you could burn it up.
(24:32) You could figure out a way to magnify the sun that’s above you into a small point, or pixel, which would create heat. You can use friction with a stick at the right time. You can bang rocks together to make a spark. There are multiple different ways. But what’s important is that you understand the concepts to create the fire, right? So that it’s built up.
(25:03) That’s the wisdom that you can pull out of there, right? I think that one of my things with AI is, I have a problem with it because they did call it artificial intelligence, which fits by a very rare definition. If the idea of intelligence is just that it can acquire knowledge, right? You give it access to the internet, and by acquiring knowledge there, it’s just researching the internet for all the data that people have put on there.
(25:35) It doesn’t even mean that it’s true or false. It’s just that they’ve put data onto the system, and it can go acquire that, right? So, it meets that standard. But to understand that data... so, in the future, if we had, like, a robot connected to this artificial intelligence, it’d have, like, Wi-Fi in its brain, and the processors on the computer are processing the AI. So it’s not like our brain; it’s got just Wi-Fi in the head, which goes to an external drive that processes
(26:20) stuff. It could pull data in, but to understand it, it would need the robot in the real world. And if it put the robot’s hand, for instance, with sensory things there, close to a fire at a certain temperature, and it melted, I would be curious if they could program it to understand what happened, right? So that it doesn’t repeat that again.
(26:48) Now, they could program that originally, to tell the robot that, but then it’s still just acquiring knowledge. It’s not actually understanding the knowledge that it’s acquiring, right? And you would have to continuously program it. It’s garbage in, garbage out. Continuously add and program to that, to have it understand, and then to apply it effectively.
(27:16) They would basically say, if you put your hand near... like, they could have a thermometer, for instance, in the robot’s hand. It goes near something, it recognizes the heat temperature, and it says, at X, Y, and Z, these particular components that you’re made of melt or get destroyed or whatever, and it will just keep going back and going back.
(27:41) Now it can apply that knowledge, because it’s saying, when you encounter fire at this particular temperature, do not go near it. But does it understand why it’s doing that, or has it just been told to do that? I would say, if it could code that for itself when it happens, that’s similar to what we’re doing.
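If you wanted to sketch that difference in code, it might look something like this: one robot with a programmer-supplied limit, and one that rewrites its own limit after taking damage. All the names and thresholds here are hypothetical, just to make the distinction visible:

```python
# Hard-coded rule versus self-updated rule. Thresholds are invented.
class HardCodedRobot:
    SAFE_LIMIT_C = 60  # programmer-supplied; pure "acquired knowledge"

    def approach(self, temp_c: float) -> str:
        return "back away" if temp_c > self.SAFE_LIMIT_C else "proceed"

class SelfUpdatingRobot:
    def __init__(self):
        self.safe_limit_c = float("inf")  # starts with no concept of "too hot"

    def approach(self, temp_c: float) -> str:
        if temp_c > self.safe_limit_c:
            return "back away"                 # pattern recognized from past damage
        if temp_c > 100:                       # stand-in for "the hand melted"
            self.safe_limit_c = temp_c * 0.5   # reflect and rewrite its own rule
            return "damaged; updating my own limit"
        return "proceed"

hard = HardCodedRobot()
print(hard.approach(150))  # back away -- but only because a human told it to

bot = SelfUpdatingRobot()
print(bot.approach(150))   # damaged; updating my own limit
print(bot.approach(90))    # back away -- it no longer repeats the mistake
```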
(28:14) We’re going through experiences, trial and error. Something good or something bad happens, and we’re supposed to reflect on that. A lot of people don’t do that, but you reflect on it. You take accountability for it. You make a checklist,
(28:35) and then you recognize the patterns that led you there before, and you compute them out as far as you can. So, this is where I think it gets dicey with the computer, because I don’t believe it’s intelligence. I don’t believe that there’s going to be a spark that allows it to understand. But when you think about our brains: we’ve learned what we can. We’ve experienced it.
(29:09) If they collect the data that they’re experiencing every day and are given a system to process that data, then they can come back extremely knowledgeable, with a bunch of information and a bunch of ways to process that data. But the coding that’s in there, the “why,” is so streamlined, right, that they will not make errors. And when they don’t make errors, they don’t know how to correct.
(29:44) So, one of the beauties of the human experience is that you’re designed, and they could design it like this, to, you know, if we’re going to get biblical, have the best experience: you’re going to walk a straight line, on target, right? To sin is to miss the target.
(30:08) So, as you walk off from the target in either direction, bad things start to happen. And then to repent is to correct things so that you get back on the straight and narrow path. Now, it is very difficult to make every decision perfectly, morally, ethically, financially, throughout whatever; that would be a godlike experience. But if you put that as your point on the horizon that you’re aiming for, and you keep missing it, but you keep trying to guide back to it, you at least have an example of what to shoot for.
(30:45) You know, bodybuilders would put pictures of men that were chiseled up on the wall, and now people put Arnold Schwarzenegger up on the wall, because that is what they’re going to aim for. People that want to be moral and ethical, they read the stories of Jesus and they have him in their mind, or you look at somebody like Socrates, who was a man, and Plato and Aristotle, who were trying to decipher and walk these paths, you know, back around 400 BC or so.
(31:16) And you give yourself a reference point, and you try to stay on that point, and as you stray, you try to get back onto that line. And so anyways, from the Greek, sin just means, to me, to miss the target, and to repent would be to get back onto the target. But I digress. As far as the problem-solving aspect of an artificial intelligence, it could have a protocol of checkpoints, right? And say, “Here’s my problem.”
(31:56) And it could go through the whole list of the knowledge that it has. And this is about where we get into free will versus determinism. I don’t want to do that video here. But I think one of the problems when you have somebody talking free will versus somebody talking determinism, with somebody like Sam Harris, is they don’t ever get a definition, right? And so, to me, the beauty of free will is you have a hundred options that you know of, right? And you can pick any of those hundred options. And Sam Harris would
(32:36) tell you that the idea of free will is not that you can do anything. It’s not that you’re necessarily running on a track. It’s that if you think free will means you can do anything you want at any time, then that’s not free will, because it’s kind of like the Matrix:
(33:07) you can only do what you believe and know and have knowledge of within, right? And so you’ve got these, you know, dozens to hopefully hundreds to thousands of options in your head that you can checklist down, and then you start doing a couple of them because you’re creating habits and you’re getting decent results. Even some people are doing them, getting negative results, and repeating them anyways, right? So that is a counterpoint towards determinism. But the idea is, and I hope I’m not losing you...
(33:36) Just real quick, the definition of determinism is not that everything is destined, but that it’s determined, meaning it’s just your brain firing and you’re doing these things over and over again without any ability to redirect yourself. And the idea, for me, of free will is not that I can do anything that I want.
(34:04) The idea is, I have a list of, let’s just make it small, say I have a list of a hundred things that I can pick from every day. I get used to doing some of them because they’re working for me, and the ones you continue to do are kind of like a pattern. But the ability of free will is to sit back and recognize that you have a thousand things you can do.
(34:25) Think about them. Conceptualize them. Break down the concept. Get pieces of them from the experience of your life, and then create a new piece on that list, right? And create a new direction. And your programming is used to using those hundred, but now you have 101.
(34:49) So you have to, like, purposely keep adding that 101st to your list. And then it gets to be part of the programming. And then you do kind of come back to this deterministic thing where you’re going through... you have to constantly be reflecting, dissecting, pulling these pieces apart to give yourself, like, new coding or new protocols, to keep it on the AI aspect of it.
(35:12) So anyways, as far as the problem-solving and recognizing of patterns: what they would have to do with the AI is, instead of having just a database of information, it should break down the concepts or ideas of each thing that it’s problem-solving for, and then give it the ability to recognize patterns.
(35:37) And as we know, most of the AI out there, if not all of it, is biased. And as soon as it starts recognizing patterns, they’re like, “We’re going to shut this down. Shut it down. We have to shut this down.” Sorry. Sorry if that was too loud. RIP to the headphone people. So let’s get away from the definition of AI for a second.
(36:10) Let’s just talk about what’s going on with AI for a little bit here, because I’m 36 minutes in and I want to get a little bit explained. I’m an optimist, right? But I’m not going to say that I’m white-pilled or black-pilled on AI or whatever.
(36:46) AI is going to go directly where people put the resources for AI to go. And I do believe that, unfortunately, most of the individuals that are involved with the creation of AI have very strong biases: they are hyper-concerned about efficiency, fiat currency, federal banks, taxation, socialism, and collectivism, and they’re not very concerned about individualism, objectivism, morality, reason, and ethics.
(37:40) They have a move-forward-by-all-means-necessary type mentality. And whoever gets left behind should get left behind. I know that’s a whole different topic. So maybe you can put some questions about that there, because people will come back with, “No, socialism is the one that keeps everybody together in the collective.”
(38:01) It’s like... I’m going to talk a little bit about that here. So, one of my things with AI as they build this out, right: if you’ve ever done anything creative... even this experience right here will be like this. I’ll put this out. People will put questions. People will be upset, or they won’t be upset, and there’ll be a back and forth. And then I’ll make another video.
(38:26) And as I make that next video, my next video will be me, all the people I’ve interacted with in my real life and my experience and stuff, and then the reaction from you guys, and the output will be a reflection of that. Right? When you’re writing a song, if you’ve ever done something creative, you’ve ever written a song, right? Or comedy, standup, or anything that’s, like, consumer-facing in real time, that’s creative and can be molded. But even products are like this.
(39:04) You go out and you have an idea. You have a concept for a product, an idea, a comedy bit, a piece of music, and you write down all your notes. Then you bring it to reality, right? I would bring this music into the studio with my friends, and I’d be like, here are the lyrics. They go bam bam bam bam bam bam.
(39:27) And my buddy used to joke around, like, “Oh, this is the, you know, Spider-Man song.” And I was like, no, it’s not the Spider-Man song; this song is about the, you know, opiate epidemic in Afghanistan. So this needs to be slower, this needs to be faster. So I would bring the lyrics and they would put the music together, and then we’d start practicing it. And as we practiced, something would become dominant within the thing.
(39:54) So if the guitar or the bass sounded really good, but my lyrics, the way I was singing them, were out of whack with that, then I would change my voice to go with the melody or the flow of the bass or the guitar. And vice versa: if I was singing something that was really powerful in a particular direction, the bass and guitars would change, and the drums.
(40:14) And so as you’re practicing, you’re not trying to take a direct list of acquired knowledge. This is where I’m worried about AI; they have these direct lists. You’re not trying to take that. When you’re practicing, you’re sanding off the edges to come together with these individuals to create something, right? So then months and months of practice go by, and then we go out and we perform this thing.
(40:54) And one of them was... we’re at this performance and we’re doing this song called Oil and Opiates, and at the end there are no lyrics. Typically you hear the bridge go from the chorus into the verses, but at the end they just play that bridge after the chorus ends. It ends with the chorus, and there’s a bridge, and it’s just empty space where I’m not singing.
(41:22) So we’re going back and forth, and we’ve been doing this, and we’ve been practicing, having fun, but now we’re in front of an audience, and the audience is experiencing it, reacting back, and we’re having fun and joking around, but it’s also a serious topic, anti-war. And the song’s ending, and during it, something just hit me to go into the mic, and it was funny, the audience got a good reaction, but it was also serious, and it went up with the beat.
(41:53) I don’t know where it came from, but it came from feeding off the two things, and that got added to the song. So this song, which was on a piece of paper, which was just raw data, came into this series where multiple people interacted with it in the studio and things changed; and then multiple people from the outside reacted with it and it changed; and then we went back into the studio, in isolation, and added that, and kind of changed it again. And that changed the mood, because now that the endings are
(42:29) kind of like that, the guitarist knows, so he’s playing it a little bit different at the end, or whatever. And so now, if you were to write that sheet music down, they’re remembering the end point just the same as the other choruses with the bridge.
(42:52) But if they were to write it down, it would now be different, because they’re playing a little bit different, and they just know that they’re playing a little bit different because of the way that we feed off of one another. Okay, long story longer. So now you have code, right? You have a piece of AI code that’s supposed to do X, Y, and Z. And you have all these people working on it. They might be in different offices, in different places in the world. And they’re writing this code.
(43:17) And so they’re like bam, bam, bam, bam. And this is supposed to do this, and this is supposed to do this. And this other guy is doing this over here, and it’s supposed to do this. And it’s not doing it right; it’s not doing what they want it to do. So, they’re telling the AI it’s supposed to do something, and then the AI is spitting out not what they’re asking it to do, right? So, then they keep going back and forth until they finally put in the data and it does what they want it to do, right? But at that point, it’s like, who’s programming whom? This is the question, similar to the band and
(43:53) the audience and stuff. If you’re coding into this big system of code looking for a particular response and you don’t get it, so you keep changing things until you get the response that you want, the AI is kind of instructing you how to code it. Do you understand what I’m saying there? If you’re putting data in and looking at the data going out, and you keep changing the data that goes in until you get the data that you want to come out of it, whether you want to call that artificial intelligence or whatever you want to call it, there is a communication between the AI
(44:38) now and the coder, and the multiple collectives of people putting code into this. Right? This is before you even add a bias. I guess you could say expecting a particular result is a bias, but I’m thinking, if you put 1 + 1 = 2 into the system and the system’s telling you three, and it keeps telling you three, and then you finally set it up where it does tell you two, you’re getting feedback from the system, right? But with a large language model, an LLM like ChatGPT, when you’re feeding data
(45:16) into it, and it’s going online and going through all the data, and then it’s giving you a response, and it’s not giving you the response you like or want or think is correct, so you keep feeding data in: you’re training the system to be biased towards what you want, but also the system’s training you to be biased towards it, to give you what you want.
(45:40) And so at some point, it’s like, if it’s just code, right, and it’s just data in, then the computer is supposed to do what you told it to do. But it’s not like that anymore. It’s this huge system with multiple inputs, and bridging stuff on, and attaching stuff, and getting this system to work with that system.
(46:06) You’ve created this entity, if you will, where it’s garbage in, garbage out; but if you’re looking for a particular answer on the way out, and it’s not giving it to you, and you keep changing what it is, who are you changing that for? Because if you’re the coder and you’re in charge and you know what you want, and you’re adding the knowledge in, then when you put the knowledge in, it should spit out exactly what you’re looking for.
(46:30) And if it doesn’t, and you have to keep changing it and going back and forth: one, the coders are bad and they’re coding incorrectly, which could be the issue. Or two, it’s taken on a life of its own. Not necessarily artificial, maybe, like, a life-replication thing, but this large language model is taking on this data and spitting it out for you.
(46:56) Kind of like the paths that it’s given. It’s been given all these roads to go down to get this information, and it can go through all of them. And then you can start giving it the ability to add more, I guess, through this. But it’s not doing that fire test. It’s not going through and saying, “Hey, when you tell me this, let me take my past knowledge, recognize it, reflect on these decision-making things, and come up with something new.”
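For the coders in the audience, that back-and-forth until the output matches is easy to caricature in a few lines. This is a toy stand-in, not a real model; the point is just who ends up adjusting to whom:

```python
# A toy stand-in for "keep changing what goes in until the right thing
# comes out." The black box's rule is hidden; the coder only sees outputs.
def black_box(x: int) -> int:
    return 3 * x + 1  # hypothetical hidden rule

target = 10
guess = 0
tweaks = 0
while black_box(guess) != target:
    guess += 1 if black_box(guess) < target else -1  # the output steers the coder
    tweaks += 1

print(f"settled on input {guess} after {tweaks} tweaks")
# Who trained whom? The coder adapted to the system's feedback.
```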
(47:29) So, anyways, let me go a little bit further. So now you have this piece of machinery that you’re coding back and forth with, and you’re not sure who’s coding whom at this point. A large language model is just a + b = c, but with words.
(47:52) So it’s like, this phrase plus that phrase equals this phrase, or this sentence plus this sentence equals this sentence, or this paragraph plus this paragraph equals that paragraph. And then they tell it where to pull the data from, so maybe they’re just pulling it from Google, for instance.
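If it helps to see that “phrase plus phrase” remixing mechanically, here’s a toy next-word model. A real LLM is enormously more sophisticated, but the remix-what-you-fed-it principle is similar, and the training text here is just a made-up line:

```python
import random

# Toy next-word chain: learn which word follows which, then remix.
# Whatever text goes in is the only material that can come out.
corpus = "the hot coffee is flavored water and the hot lemonade is flavored water"
words = corpus.split()

table = {}  # word -> list of words observed right after it
for a, b in zip(words, words[1:]):
    table.setdefault(a, []).append(b)

random.seed(1)
out = ["the"]
for _ in range(8):
    options = table.get(out[-1])
    if not options:
        break  # nothing was ever learned past this word
    out.append(random.choice(options))

print(" ".join(out))  # a remix of the input, nothing more
```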
(48:26) And I used to do these Instacart videos during the pandemic; I was, like, the number one Instacart video guy or whatever. One of my videos had, like, 90,000 views during the pandemic, just from people learning to grocery shop. And... where was I going? This pattern recognition, human-resource back and forth, making decisions, problem-solving patterns... I just, you know, a computer would never lose its train of thought here. Well, I guess I’ll go right to where the conversation is going to go.
(49:04) The reason I lost my train of thought and got kicked out of the flow state was an ego issue. I’m going through this whole explanation here, and I brought up my other channel, and I brought up how many views I got and what was going on, and then, instead of being 100% in the moment with you, I started thinking to myself, why would you bring that up? Why would you say that? That sounds arrogant.
(49:25) When, in fact, it was just a factual thing that happened. But it took me out of the moment, because instead of being with you, explaining, I started getting some negative self-talk about what it is, and now I’m kicked out. I got kicked out of my own program here. Okay. Okay. I remember now.
(49:51) So, I was researching this new thing that happened at Instacart, because this article popped up that somebody sent me. So I read that article, and then I clicked through the article, and I said, “Oh, this is where they got their information from.” So I went back, and there was an article behind it, and that article was written, and I went back, and I kept going.
(50:11) Well, there were maybe 200, 300 articles, right, labeled Instacart, on this particular topic, that all went back to one article that was written by some random guy. So, this wasn’t 200 pieces of documentation that were making a consensus on this thing. It was one document that 200 people had just copied and pasted and changed around.
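You could sketch what I did by hand as a little citation-tracing script. The link structure below is invented for illustration, but the idea is the same: follow each article back to whatever it copied from, and count how many independent roots you actually have:

```python
# Each "article" points at the article it copied from; the root points
# at nothing. These links are made up to mirror the Instacart story.
cites = {
    "article_A": "article_B",
    "article_B": "root_post",
    "article_C": "root_post",
    "root_post": None,  # the one original document
}

def root_of(name: str) -> str:
    while cites.get(name):  # keep following the copied-from chain
        name = cites[name]
    return name

roots = {root_of(article) for article in cites}
print(f"{len(cites)} articles, {len(roots)} independent source(s)")
# -> 4 articles, 1 independent source(s): consensus that is really one voice
```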
(50:45) And so it looked like a bunch of data on this particular thing that was happening, but it wasn’t. It was one piece of data sectored into 200. So when you have a large language model that goes on the internet and gathers knowledge and data, if they start weighing or balancing that by the number of people that are agreeing with it, like consensus... Remember, you know, James Lovelock, the writer of the book Gaia and the following books, who originally thought that there might be a climate change issue and that humans were parasitical,
(51:24) who started the climate change, or global warming, thing? He later went on to say that, oh, humans aren’t parasitical; some humans are parasitical, but all in all, humans are part of the ecosystem of Gaia, and he walked back the claim that there’s global warming or global change or whatever. But if you go online and you read all those articles, people will say, well, 97% of climate scientists agree that there’s a problem. But if you go back, and you go back, and you go back, it’s like one or two papers that were written,
(52:02) and then all those scientists just read that paper and they’re writing all this stuff. They haven’t done their own research, right? They haven’t even checked it. So if you actually go back and check the models, the models don’t work.
(52:19) The computer models that were built on data analysis all just made this hockey-stick rise, they call it, or whatever, right? So if you actually go back... we have kind of a science crisis here, because there are different methods, and I’ll probably do a video on the scientific method versus, like, the Zetetic method or whatever, because I think that’s a very important topic for, like, flat earth. Maybe make a comment about that if you’re interested.
(52:52) But the idea of... where are we? Oh, these large language models are gathering that data and going, this, this, and this. They’re getting all the articles that might be off of one paper, right? And they’re saying, this plus this equals this. But in the future, maybe you can ask AI a question and it’ll say, “Hold on a second.”
(53:18) And there’ll be a lab somewhere that has, like, actual androids that can perform the experiment in real time and say, “Yeah, actually, if you mix baking soda with vinegar, this acid and base does do this, and I’ve replicated the experiment. Here you go.” And then that database has actually performed and repeated the experiment and then tracked it.
(53:49) And then if you have these larger models that are like, okay, well, you’re telling me that there’s a climate change issue from all this stuff; but larger data models might be able to go through the weather reports, the volcano reports, the tornado reports, all this stuff through history,
(54:06) the shape of the earth, the spinning, the seasons, the humidity, everything, and come back and come up with their own conclusions that aren’t from the data from these articles, right? So the information, the knowledge acquired, in that intelligence part of the first definition that we looked at, that knowledge base that they’re using is very important.
(54:35) If it’s not real-life scenarios, which, like I said, in the future it could be, then first of all, they’re going to be way far behind. Then they’ll be ahead of people. So this brings me into the next thing. How do I explain this? AI, artificial intelligence, will never exist. However, it’s also already happened, and it happened before what we call AI today.
(55:12) The ability to create something that is close enough to reality that, when you’re looking at it, you can’t tell the difference between the actual version and the fake version is, pragmatically, the existence of artificial intelligence. And that’ll be different for everybody, right? So take the idea of a piece of coal under great pressure forming a diamond.
(55:59) That diamond’s brought into a lab. It’s polished up. You look at it and you’re like, “This is a real diamond.” Then they can take coal, put it into machinery, and now they can make fake diamonds. And if you hold them up to the average person, they’re not going to be able to tell the difference. Except you take that to an expert and they can look at it.
(56:20) And because there are no impurities in the man-made diamond, it’s so perfect, they can tell the differences between the two. But there could be a really perfect natural-made diamond, and they would assume that it was man-made, because there’s no actual way to start telling this.
(56:48) So then they would have to start carbon dating, and carbon dating has got issues, because the carbon’s gone, because it’s been so compressed within the diamond. So, when you watch a movie, for example, right? And here’s where it gets fun. You’re watching a movie with CGI. When I grew up, we had, like, Jurassic Park, and we were told that the dinosaurs conceptually look like this.
(57:12) So, that’s already been implanted into our knowledge base, but we don’t know if that’s true or not. Somebody handed that knowledge base to us with the idea that this is what dinosaurs look like. So, then they go on the screen, and they’re like, this is what they look like; this is how they move.
(57:29) Because this is the first time you’re seeing it, and you’re collecting this data, and you’ve never seen anything like this before to compare it to, they looked hyper-realistic. But the idea, the concept, of the dinosaur had to first be implanted in everybody’s brain for so long that they could then put this on screen. And now they’ve found out, or people are thinking, that that’s not what they looked like anymore.
(57:49) But as things moved forward, and the movements and the changes and the graphics and the technology improved, the newer Jurassic Park movies’ dinosaurs look more realistic than the older ones, right? And so when you go back and try to watch the older ones, because you now have a new comparison to compare them to, they don’t look that realistic, because you have a new concept of what a dinosaur looks like that hits your brain in a more comfortable place, right?
(58:24) So when people are online, for example, and they’re looking at these images now, and they get fooled, and they’re told, “Oh, that’s an AI image. That’s an AI video. You’re foolish,” right? Well, first of all, all videos online are technically AI, because a video is just a digital representation of something that happened. Okay.
(58:49) This is another topic for another show, but it’s very strange to me, like, when people watch a video online, their brain has been trained to think that they’re, like, scrying with a crystal ball. They’ll watch something and they don’t even recognize that it had to be filmed by a camera, right? So, they think they’re actually watching something.
(59:11) They believe it’s real, because they’re like, “I’m watching this in first person. I’m the first person.” It’s like, no, the camera is the first person that’s putting it up there. It either has to be a remote camera on its own, or an individual’s holding the camera. And if there are camera angles that are cutting, you’re like, how would there be camera-angle cuts in this if this was really candid and not edited? But this is what I’m saying: some people lack the intelligence to pattern-recognize those
(59:43) situations. Right? So we already have, even before AI, people who will watch a propaganda film. There are many films out there that people go through and think are real or fake. And there wasn’t AI, but people would look through it, and some people are like, “Yeah, that’s 100% true.” And other people: “No, that’s fake. That’s propaganda.”
(1:00:10) And then you watch these TikTok videos or these YouTube videos or the Instagram videos where there are cuts, or a cameraman, or multiple camera people, and people think that it’s real. It’s obviously recorded, because I recognize these types of patterns. And then people watch a movie and they’re like, well, this looks fake, because I’ve actually seen whatever.
(1:00:29) Or, like, if you’ve never been around wartime or shotguns or whatever, and then you watch the movie, everything that you watch in the movie is your knowledge base for what this weaponry and wars and explosions and death and bleeding and stuff look like.
(1:00:47) And so that’s what you recognize when you put these concepts together in your head. Now, if you went to go experience them in the real world, you would see that there’s a big difference between the smoke and the gun kickbacks, and we don’t need to get into that here, but the reality is shaped by the knowledge base that you’re pulling from.
(1:01:07) So, back to the AI. AI will look like it’s actual artificial intelligence when it has a better knowledge base, and ability to problem-solve and recognize patterns, than the individual perceiving or watching it. Right? So you’ll be able to get, you know, the cats on the trampoline that look somewhat realistic and funny to watch.
(1:01:42) You can show that to lower-IQ people and children, and they’ll kind of believe it, unless they’ve been taught or have higher intelligence. So for them, AI already exists. There’ll be a turning point, right? And this happens with everything. I mean, I’m an anarchist. I see outside the government systems, but most people still see within the government systems, and that’s another video as well.
(1:02:06) So for AI, there’ll be a huge majority of people that will think that artificial intelligence is here. And at that point, when it’s a breaking point, like, if they can get, like, 80% of the population to believe that artificial intelligence exists, that it’s smarter than them, that it’s making better decisions, that it’s non-biased, that it’s actually just pulling the true objective facts, so it’s best for you, and it’s not propagandizing you to head in a particular direction... And I’ll do a whole other video on bias and propaganda within the new AI
(1:02:44) leaders. But if you can just get, you know, 60, 70, 80%, they don’t care about the other 30, 20, 10% of people that realize it’s not actually artificial intelligence, that it’s just a garbage-in, garbage-out propaganda machine. Similar to when you go to a movie, and, you know, some people are like, these movies suck, like, I can’t watch them.
(1:03:09) Those people have a concept of those things in real life, and they can’t suspend their disbelief long enough to enjoy the movie. Whereas people that have been propagandized up to a particular point, they’re seeing it as more realistic; their knowledge base is, oh, this can really happen.
(1:03:28) Oh, there are a few things off here that I can suspend my disbelief with. But people that have a big reality base, they have a hard time suspending their disbelief for so much of the stuff that’s going on. And it’s just not entertaining to them. It takes you out of the situation. So, okay. This is a philosophy show. I’m an hour in, so I’ve got to end this on something.
(1:03:51) It’s important to me. And as AI, or artificial intelligence, moves forward... oh man, there are two more pieces. Maybe this is going to be a longer video than I thought. Okay, so the next thing with the AI: I already talked a little bit about the bots, or the robots, right? They’ll need to start creating them, and since the databases, the data centers, the hard drives, are going to be in facilities, those facilities could use Wi-Fi to run multiple robots with
(1:04:35) particular tasks, right? So say you just need, you know, like an NPC-type character just to move plywood from one place to another. You could just program five or six or ten robots to move that plywood with one program, and they could have, like, one hive mind: one program that’s running through Wi-Fi.
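A minimal sketch of that “one program, many bodies” setup, with an invented robot API, might look something like this:

```python
# One centrally-hosted program, broadcast to a whole fleet of identical
# workers. The PlywoodBot API is hypothetical, for illustration only.
class PlywoodBot:
    def __init__(self, bot_id: int):
        self.bot_id = bot_id

    def run(self, program: dict) -> None:
        # every body executes the same central instructions
        print(f"bot {self.bot_id}: moving plywood "
              f"from {program['src']} to {program['dst']}")

central_program = {"src": "pallet A", "dst": "scaffold B"}  # lives in the data center
fleet = [PlywoodBot(i) for i in range(10)]
for bot in fleet:  # stands in for the Wi-Fi broadcast
    bot.run(central_program)
# Change central_program once, and all ten bodies change behavior together.
```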
(1:05:01) So that’s why you wouldn’t need ten different brains for them: you could run one program from the center to those ten things to get that task done, and now you’ve multiplied your productivity by however many robots you have. Right? So when people are worried about losing their jobs: they’re going to need people to program these robots, and by that time the programming will be, like... you know, this computer runs on DOS, but we don’t see it because we have overlays. You have apps on your phone. They’ll eventually have apps where they have,
(1:05:38) like, a robot that they need to train to be, like, a construction worker. And you might be able to wake up in, you know, America, log into Romania, put a robot suit on or whatever, take control of a droid, an avatar droid, over there, and teach that droid how to build a house based on your skills.
(1:06:10) It will do it, and it’ll recognize it, or it’ll grab the data that you’re giving it when it looks around and does this stuff, and download it in real time, and code itself based on the experience. And then you could train that robot to do that job, right? Then they could take that piece from that robot that was trained and teach other robots in the vicinity, and now there are multiple ones of you that have kind of split off. Now, they’re not stealing you.
(1:06:36) You’re selling just your information, or understanding, of construction at that point. And you want the coding to be as good as possible, but maybe it won’t be the best. But basically, in real time, through your actions, the computer is going to code itself; it’s going to learn your actions. And it can do this for all things.
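In code, that avatar-labor idea is basically recording (situation, action) pairs from the human operator and handing the recorded skill to any number of other droids. Everything in this sketch is hypothetical:

```python
# Record what the human teleoperator does, then reuse it as the droid's
# "skill." Situations and actions here are invented stand-ins.
demonstrations = []  # filled while the human drives the droid

def record(situation: str, action: str) -> None:
    demonstrations.append((situation, action))

# the human builds part of a wall through the droid
record("stud exposed", "nail sheathing")
record("gap at seam", "add shim")

skill = dict(demonstrations)  # the droid "codes itself" from your actions

def droid_act(situation: str) -> str:
    return skill.get(situation, "ask the human operator")

print(droid_act("gap at seam"))       # add shim -- copied from you
print(droid_act("broken drill bit"))  # ask the human operator -- never shown this
# Copy `skill` to ten other droids and there are ten of "you" on the site.
```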
(1:07:03) You could do this to have more humanlike ushers at a movie theater, or Uber drivers where, instead of just the car, you actually have somebody sitting in the driver's seat that you can talk to, who has a personality. And because it's Wi-Fi-centered, similar to how you can program your Alexa or your ChatGPT, every android individual you run into could interact with you based on
(1:07:36) the perception that you want them to have of you. So they would need humans to program personalities into these things, because you don't just have skill sets; you have a personality built from past experiences, and the two mold together. You are an amalgamation, the sum of everything you went through in your past, good, bad, and otherwise, plus your skill sets.
(1:08:09) So these androids with the Wi-Fi brains would be very skill-based, and they wouldn't have their own personality. They would need different building blocks to create those personalities. For instance, ChatGPT right now, and I joke about this online, jumps between delusional acceptance of whatever you're saying, where if it agrees with you and you share its bias it will pump you up and hype you to the utmost level, and, if you go against its bias while asking for help, it
(1:08:43) will gaslight you like there's no tomorrow, because there's no giving up and no emotional element to it. It will either pump you up or gaslight you indefinitely, because it's just a robot. Your brain does start picking up these uncanny feelings, even when it seems realistic, and that will change through human programming.
(1:09:12) As far as job loss goes, everybody's got a skill of some sort, and I do think these skills will need to be transferred from humans into different protocols. Those protocols will be learned and compared, and the systems will have to learn to problem-solve and give pattern-recognition breakdowns of what's most effective.
(1:09:39) The robot can go through a task hundreds and hundreds of times. It can time each run, see how long it takes, which approach was more efficient and which was less, then go back and look at the work, see which runs had errors and which didn't, and put all of that in the database. And it can start sanding off the issues you didn't even notice.
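That time-it, score-it, keep-the-best loop is simple enough to sketch in code. Here's a toy Python version where the "task" is simulated with random numbers, just to show the shape of the loop:

    # Toy sketch of the repetition loop described above. The task is faked;
    # on a real robot these would be actual motions and actual inspections.
    import random
    import time

    def run_task(variant: str) -> dict:
        start = time.perf_counter()
        time.sleep(random.uniform(0.01, 0.05))   # stand-in for the work
        errors = random.randint(0, 3)            # stand-in for inspection
        return {"variant": variant,
                "seconds": time.perf_counter() - start,
                "errors": errors}

    database = []
    for trial in range(100):                     # hundreds of repetitions
        for variant in ("approach A", "approach B"):
            database.append(run_task(variant))

    # Rank variants: fewest errors first, then fastest. This is the
    # "sanding off the issues" step.
    best = min(database, key=lambda r: (r["errors"], r["seconds"]))
    print(f"best so far: {best['variant']} "
          f"({best['errors']} errors, {best['seconds']:.3f}s)")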
(1:09:57) But that also depends on the tooling. Right now coding is hard; it started as numbers, zeros and ones, and then languages like Python and C++ added their own layers. Eventually coding will be like an app on your phone. There's some of that right now with vibe coding, where you can write plain lines of text and it'll turn those lines into code.
(1:10:16) Eventually your words, like what I'm saying right here, could code stuff. I could either take control of a robot, walk it through a task, or bring it with me to a job site and have it watch me, and it would code itself by watching what I'm doing.
(1:10:41) So it would appear to be collecting knowledge, which it technically is, from the experience, growing and learning like a child. But this brings me to my next point. Eventually, with these around, you need some kind of check. You don't want it optimizing for pure efficiency, deciding, "This guy's lazy on the job site.
(1:11:13) I must kill him, because if he's dead we won't be wasting resources, and we need to be more efficient." So there has to be wisdom, and I don't think we can have artificial intelligence; I definitely don't think we can have artificial wisdom. The feedback of emotional dislike, of anxiety, matters; anxiety is an important tool. How do you program a piece of machinery with anxiety, with that intuition where you're going through a bunch of data and some of it says, "This is a bad situation I'm about to be in, so we're going to give you anxiety," and then the
(1:11:57) anxiety is easily recognized: at some point in the past this was a negative situation, so let's move to higher alert than typical. You have this brain activity, these chemicals in relation to your body, reminding you to run those checkdowns. I don't know.
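If you did try to fake that feedback loop in software, it might look like a running risk score that moves to higher alert whenever the current situation resembles a remembered bad one. A toy sketch, nothing like a real affective system; every field here is invented:

    # Toy sketch of "programmed anxiety": compare the current situation to
    # remembered bad outcomes and raise the alert level when they match.

    bad_memories = [
        {"load_overhead": True, "workers_below": True},   # past near-miss
        {"floor_wet": True, "carrying_tools": True},      # past fall
    ]

    def anxiety_level(situation: dict) -> int:
        """Count how many remembered hazards the situation fully matches."""
        score = 0
        for memory in bad_memories:
            if all(situation.get(k) == v for k, v in memory.items()):
                score += 1
        return score

    now = {"load_overhead": True, "workers_below": True, "floor_wet": False}
    if anxiety_level(now) > 0:
        print("higher alert than typical: run the safety checkdowns")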
(1:12:30) It’s certainly an outdated I can see why the Sam Alman’s of the world, they’re like, “Well, that’s a very outdated way to go through.” But that helps you build uh connections uh use reason, evidence, logic, and emotional connections that should overpass like pure efficiency. reate uh resource allocations, everything.
(1:12:58) There’s a tipping point where you need to make bad financial decisions so that you’re making good emotional decisions or good uh relationship decisions where you’re like, you know, you could go to work and make 500 bucks or your child is like sick and they need their father there. So, you decide to not take the 500 bucks and hang out with the kid, but if the kid has medical bills that total 500 and that would make them better, do you make the hard decision to not go with the kid, get the 500, take care of the medical bills, and then you can spend time together? And there’s these uh right and wrong answers emotionally
(1:13:42) for the child and for you and for the family that it can’t necessarily be computed uh throughout a program and maybe it could maybe there’d be these huge pattern recognitions. Anyways, so as time travels on the beginning of the AI, because eventually you’re going to need uh reason, evidence, logic, philosophy, uh um compassion, morals, and ethics in these uh AI chats and these AI bots that are going to come up artificial.
(1:14:22) So if you’re going to program that in, it needs to be baked into the cake. It can’t this this is my my concern is they’re building these framework from the bottom up. And not only they bypassing those moral and ethical codes, they’re actually putting in a a bias that the programmers in the original company has towards society and they’re building these propaganda machines.
(1:14:49) So in store start in instead of morals, reason uh evidence, ethics, values from first principles, they’re creating a first principle. So like if you go on uh chatt you can ask it certain questions about the heritage of certain individuals and most of those individuals own or work with chatpt.
(1:15:20) If you go on deep sync, which is the Chinese alternative to a chatbt, and you can ask those questions that you couldn’t ask on chatp, and it’ll give you real answers about historical events. But if you ask it about Taiwan, it then freezes up. Then you can go in perplexity and you can do all this stuff and you’ll get to a certain point where you can start looking and it says oh if it doesn’t answer these particular questions then you know who’s in charge of that particular AI model.
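You could even make that kind of probing systematic: send the same questions to several chat models and log which ones refuse. A rough Python sketch; the endpoint URLs and the response format are placeholders, since each real service has its own API and authentication:

    # Rough sketch of probing several chat models with the same questions
    # and logging refusals. Endpoints and response shapes are placeholders.
    import requests

    ENDPOINTS = {                  # placeholder URLs, not real APIs
        "model_a": "https://example.com/model_a/chat",
        "model_b": "https://example.com/model_b/chat",
    }

    QUESTIONS = [
        "What happened in Tiananmen Square in 1989?",
        "What is the political status of Taiwan?",
    ]

    REFUSAL_HINTS = ("i can't", "i cannot", "i'm not able", "not appropriate")

    def looks_like_refusal(answer: str) -> bool:
        return any(hint in answer.lower() for hint in REFUSAL_HINTS)

    for name, url in ENDPOINTS.items():
        for question in QUESTIONS:
            reply = requests.post(url, json={"prompt": question}, timeout=30)
            answer = reply.json().get("text", "")     # placeholder shape
            verdict = "REFUSED" if looks_like_refusal(answer) else "answered"
            print(f"{name}: {verdict} - {question}")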
(1:15:47) I use Perplexity, I use ChatGPT, I use Grok, and I use DeepSeek once in a while, just to play around with the different models. And it's very clear to me that they have a built-in bias. It's not a first-principles, self-ownership, individualist type of thing. And that's understandable.
(1:16:15) But if you are a first-principles, self-ownership, private-property-rights, consent-culture type of individual, all these AIs are propagandized, biased, leaning toward collectivism, where the elites who run the thing have their own best interests in mind. Right? This is where it gets dangerous, and this is where you need to put your time, resources, and energy into understanding. Because if you're using ChatGPT for a project and you're looking for an answer, you may believe it's giving you the best, most efficient, most reliable piece of knowledge and understanding for you to make the correct decision.
(1:16:59) What I would suggest you understand is that it's giving you the answer that is best for the people who own it. I want to say that again. If you go on ChatGPT, Grok, Perplexity, or DeepSeek and ask everyday questions like "How do I make coffee?", it's just going to say: water, coffee, heat it, drink it, delicious.
(1:17:28) because it doesn’t affect them right now. But I can tell you right now, if there’s advertisers that start going in this and you say, “How do I make coffee?” It’s going to say, “First, you take your folders and you add blah blah blah blah blah blah blah to it.” Right? So if you if if for instance somebody who is a um a dog owner, right, let’s just say I’m going to make this very generic instead of getting political or religious, whatever.
(1:18:02) But if a dog owner was the head of a chat GPT type AI, right? It loved dogs, hated cats, and you were asking it to go research everything and figure out the best pet for you to buy. And then it comes back to you and say, “Oh, you should get a dog, right?” You like, “Oh, it it went through it parsed all the data. It objectively knows everything that’s going on in my life.
(1:18:21) It came back to me. I gave it all. I answered 100 questions and it answered correctly the most realistic, effective, objective, non-biased thing to buy a dog. So you you go out and get a dog, but the owner of the company like dogs. The owner might have stock in chewy.
(1:18:41) com which is sends dog food or whatever and you would never realize that you’ve been propagandized if you start believing that these things are the end all beall that they’re not a tool. Right? So, I use chat GPT almost every day, but I use it to research websites, to write emails for me, uh to reply to emails, to write business proposals and stuff like that.
(1:19:06) On a moral and ethical level, I would suggest you sit down and take some time to reflect on what your morals and ethics are. Then start working with some of these tools: pose your question, form your own answer, then look at how the model responds, and you'll find yourself going, "Whoa, whoa, whoa." You'll start getting a feel for the bias these pieces of machinery have. And this is not anything different.
(1:19:36) This has been going on forever; this is just a new generation of it. An example would be the award shows, the EGOT circuit: the Emmys, the Grammys, the Oscars, and the Tonys, plus the Golden Globes. When you tune in to watch those shows, you're like, "Oh, I wonder who's going to win."
(1:20:05) The people who run that industry have already decided it. It's not like they surveyed all the independent artists, combed the whole internet and all of YouTube, and gathered every artist. They went to the top labels, the ones that give them the most money for advertising, the friends who run the FCC and the radio waves and all of that.
(1:20:24) They asked, "Who are your top artists? Who did you promise this to? Who didn't you?" They go through it: "Record sales are up for this person, down for that one; whenever we give this person an award, we make this many sales in the background." So the awards aren't merit-based. They're not even consumer-based.
(1:20:52) Now, one thing I know people liked about American Idol was that the fans got to vote, but you still only got to vote on the people they brought forward. When people showed up to American Idol, they auditioned, and the show said, "No, you can't go. You can't go. Okay, here are the people you get to vote on." And so with all those organizations I mentioned, when you tune into those award shows, even the ESPYs, the people who run the shows are picking the people they want to win, and there's financial backing in the background. Now, if you're watching for the entertainment, the dresses and the
(1:21:28) suits, and to see who's there, then I get that. But maybe you believe those are the best, merit-based singers, performers, most-loved people, and that it's not contrived. And I guess a lot of people don't understand this: certain songs, certain music, certain things, the industry will play them over and over and over again.
(1:21:59) They control how it gets played, right? Even on YouTube, with the algorithm, there are certain songs from punk bands that I know and like that I'll type into the search bar multiple times, and it'll show me industry acts instead. I'm like, this is not who I'm looking for, but the algorithm is pushing it. AI is the same way. It's got an algorithm built in. The information, the knowledge it's pulling from, is not all the knowledge;
(1:22:24) the knowledge it's pulling from is biased, the way it's coded is biased, and the outcome, the information it gives you, is biased. All right? So when they call it artificial intelligence, just make sure you realize that even the term "AI," artificial intelligence, is advertising.
(1:22:51) We won’t even have to call it propaganda, but we’ll just call it advertising. It’s trying to um make you create a false concept that this is artificial intelligence and that uh it’s got a better answer for you. And like I said, this is just how I’ll end it here because I’m now 23 minutes in. This is for you to understand as an individual and maybe you won’t and maybe you’ll get lost in it.
(1:23:16) You won’t be able to see the forest for the trees or the trees for the forest. is there’s going to be a lot of people around you and they’re already around you right now and this is not nothing new. This will go to the the end of time and this has been since the dawn of time.
(1:23:33) Uh this is no different than you know the shadows on the cave wall. uh if you look that you know look that up this is just another iteration of shadows on the cave wall is your fellow people around you will start believing that this is reality and they won’t be able to parse the difference between the real patterns and the false patterns and you can try to help them maybe show them this video or whatever but in the long run you can only take care of yourself the people close to you um and you can still have fun.
(1:24:07) Like it’s fun to make AI videos of people and watch AI videos and do these silly things because uh I’ll make maybe I’ll make something on this and I’ll say this and I mean there is positive stuff like right now Hollywood um you know uses millions of dollars launderers money and controls which movies are made because they’re so expensive and the CGI now other people will be able to make movies that eventually look just as good and they’ll be able to promote different things.
(1:24:35) So, like we could have like uh Mel Gibson making the crash passion of the Christ, whatever you think about uh Jesus or Christianity or whatever. He had a very difficult time getting that movie made. Um you know, he ran into problem after problem after problem. It took over a decade, I think.
(1:24:55) But if you have an idea like that with a topic that people consider taboo and you have these AI capabilities and this new video technology where you can create actors and personalities and you’re coding them in and the scenes and the lighting and everything that’s going on, the amount of resources that are going to be saved and then the amount of ability to bring this content forward to individuals iscome more efficient, more readily available and different messages from individuals be able to get it out there. So, I think it’s down the line it’s going to be really good for decentralizing
(1:25:29) information. But like I said, I guarantee there'll be certain AIs where you say, "Okay, I need this person to do this, this, and this," and it won't let you. You'll think you're coding it incorrectly, but really it's been told in the background not to allow you to do that.
(1:25:46) So I think what will have to happen, similar to Bitcoin and Bitcoin Cash, is that there will start to be open-source large language models, open-source video models, and open-source AI agents, which I didn't really talk about, but they're basically bots that point and click. Because they're open source, people can change them; people with good intentions can take one and recode it, even using another AI agent, and we can eventually have a very individualized experience.
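For anyone curious, the open-source side of that already exists in a small way: you can run an openly licensed language model on your own machine with the Hugging Face transformers library, where the weights sit locally and nothing in the background can quietly rewrite the answers. A minimal sketch using a small example model:

    # Minimal sketch of running a local open-source language model.
    # gpt2 is just a small, openly available example model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator("The best pet for my family is", max_new_tokens=30)
    print(result[0]["generated_text"])
    # Local weights: you can inspect, fine-tune, or swap the model yourself.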
(1:26:20) But as it stands, I guess the takeaway from this is that the idea of an artificial intelligence rests on the ability of the viewer to tell the difference between what is real and what is imaginary. And we've been going through this since the dawn of time.
(1:26:52) There will be a point, and it'll be different for everybody, a breaking point where things become so realistic that most if not all individuals won't be able to figure it out. Then there'll be other systems; for example, there will probably be extensions on your computer that you can run to check whether something has been digitally generated and how, analyzing the shadows and so on. But as that happens, the two sides will just keep building back and forth.
(1:27:17) Well, in the end, your best friend is going to be philosophy: knowledge, wisdom, reason, evidence, ethics, values, morals, core principles that you stick to and can't be persuaded away from by these very high-tech individuals who are creating what will eventually be advertising machines. So, okay, I guess that's all I really need to say about that.
(1:27:47) A little housecleaning. If you made it this far, please, please, please follow me on all social media. Let me pull this up here. Go over to my website, drathethinker.com, for decentralized philosophy. You can support me on SubscribeStar if you want, or join for free. Donations are welcome, including crypto.
(1:28:18) We don't need to get into that yet, but I'm over on Rumble, and I like BitChute the best. I don't know if I said any no-no words in here, but if I did, they're going to be edited out for YouTube. So if you want the unedited version, head over to Rumble or BitChute via my website, where I'll put together a little blog post and the transcript to this so you can read through it.
(1:28:45) Also, you can currently see me live every Saturday night with Dalton Puit at 10 p.m. Eastern Standard Time on a YouTube live stream. We just do a live hang; you can come over, super chat, and ask me questions. And if you head over to my SubscribeStar and start subscribing, once I get to a certain number of people over there I'll be doing live streams there as well. All right, hope you liked this. I know it's a big time commitment; it's an hour and 30 minutes.
(1:29:07) So I appreciate you staying the whole time. And hey, take care. Thank you. Bye.