Jim Rickards on MoneyGPT: AI and the Threat to the Global Economy

November 13, 2024 – What scenarios might unfold in the coming months and years as highly advanced, autonomous models like OpenAI’s o1 are integrated into capital markets—and potentially exploited or weaponized within the global economy? That’s exactly what we’ll explore today with bestselling author Jim Rickards, who has just released his latest book, MoneyGPT: AI and the Threat to the Global Economy. Jim is the editor of the Strategic Intelligence Newsletter and the author of numerous must-read books. He’s also an investment advisor, lawyer, inventor, and economist. In this special edition of the Financial Sense Newshour, Jim delves into the opportunities and risks posed by AI’s growing role in trading strategies, risk management, and decision-making within the capital markets. You definitely don’t want to miss this interview!

To speak with any of our advisors or wealth managers, feel free to Contact Us online or give us a call at (888) 486-3939.

Stay ahead of the news! Subscribe to our premium weekday podcast

Transcript:

Cris Sheridan:
What sort of scenarios could we see in the months and years ahead as highly advanced AI models like OpenAI's new o1 model are applied to the capital markets and possibly exploited or weaponized? That's the subject of a fascinating new book which we are going to discuss today. Joining us on the show is Jim Rickards. He's the editor of the Strategic Intelligence Newsletter and a bestselling author of numerous must-read books. He's also an investment advisor, lawyer, inventor, and economist. We're going to discuss his latest book, which just came out November 12, 2024. It's titled MoneyGPT: AI and the Threat to the Global Economy. Jim, thank you for joining us today.

Jim Rickards:
It's great to be with you, Cris.

Cris Sheridan:
So, Jim, I've been following the AI debates for many years now, and I know that you have as well. Philosophers have been talking about the existential risks of AI for a very long time. But it's interesting: there's been almost a complete blind spot, a vacuum of analysis, when it comes to how AI can affect us in the financial markets and the global economy. And that's where your book fits into this very important discussion. Tell us about it.

Jim Rickards:
Thank you, Cris. That's right. First of all, just to be clear, the book is not bashing AI. AI is huge, it's going to grow bigger, and it's already here. It's not a book that says, here's what's going to happen a year from now. This is happening right now; it's all around us. I say to people, when you open your refrigerator and there's a message that says, please change your water filter, that's AI. It's monitoring the temperature and the water filter and telling you what to do. So there's AI in your refrigerator, in the dashboard of your car. It's everywhere. And you're absolutely right, it's very powerful and it will do a lot of good. For example, when you're trying to research medicine or design drugs, basically it's a question of combinations of molecules and how they affect certain diseases and treatments. Humans can do that, and they can be very good at it, but a computer can do it in much less time with a much larger database, and we're talking exponentially larger, and actually find cures for disease that humans never could, not because they're not smart enough, but because there aren't enough of them; you can't get through that many combinations as fast as the computer. So there's an example of good that's already coming out of it, some medicines that result from that. But I stay in my lanes. My areas of expertise have two lanes: one is capital markets and banking, the other is national security and intelligence. So I look at AI in those two worlds. There's a nod to biochemistry and engineering and all kinds of other applications, and there's a lot of good stuff there, but I really want to look at those two areas. And as you mentioned, I always say that when it comes to your own money, everybody has a PhD. That's a subject of interest to everyone. The book is not particularly about what's going on in the stock market. We all know what's going on in the stock market: Nvidia, AMD, Intel, Microsoft, Google, Facebook, now Meta, and I mention Apple and a few others. Those stocks are going to the moon, and whether it's the Magnificent Seven or the Fabulous Five or whatever you want to call it, that group is driving the entire stock market higher. You can get into a debate about whether that's a bubble or not. I happen to think it is a bubble; I understand there's a contrary view, but that's not what the book's about. Talk to your financial advisor or come up with your own views on that. This book is about the technology itself that's embedded in a lot of trading and decision making, and how it can cause chaos, particularly when it interacts with human nature. A couple of points about AI. First of all, it stands for artificial intelligence, but it's not intelligent. There's nothing like a human brain inside any of it. It's math. It's powerful, it does stuff, I'm not denying that, but it's just all math. There's no actual human brain in digital form there. So it's not intelligent, but it is fast and powerful and can do a lot of things. The other point I make is that human nature hasn't changed in 50,000 years. Culture changes, society changes, but we're not that far removed from Neolithic times in terms of biases and emotional reactions. I can point to banking and financial panics in the mid-14th century.
You know, the houses of Bardi and Peruzzi collapsed in Florence in the 1340s. So that's not new. What is new is the combination of the two: the power of AI and its functions, with human nature, which isn't going to change, and how they interact and actually create a feedback loop that's very damaging. Let me be specific. That sounds kind of theoretical, and maybe it starts out that way, but I'll give you a specific example. There's something called the fallacy of composition. That's a fancy name, but what it means is that things that might be really good decisions for an individual can be catastrophic when done at large scale. When everybody does the same thing, the thing falls down when it gets to a larger composition. So you're at a football game and you don't have a very good view. The guy in front of you has a big hat or he's tall or whatever, and you can't see the game too well. So you stand up. Hey, now I can see the game, I can see the whole field, it's beautiful, I've got a great view. But what happens next? The person behind you stands up, and then the person behind her stands up, and the next thing you know, the whole stadium is on its feet. Nobody's better off, because you've got the same view you had when you were sitting, and everybody's worse off because you're all standing up. So there's an example where a smart move for the individual turns out to have very adverse consequences for the entire population involved, or the stadium full of people. Now let's apply that to the stock market. Let's say you're having a market crash, and as your audience knows, they happen with some frequency. In March 2020, the stock market fell 30% in one month. It wasn't the biggest crash in history, but it was the fastest crash of that magnitude in history, falling that much in really a matter of weeks, not six months or a year. So what do you do in a situation like that? Some people just kind of let it happen, or some people wake up too late. But a smart strategy says making money is great, but you have to avoid losing money, because losing money is like a quarterback taking a 20-yard loss: you have to pick up from a lower base. So what you can do is sell all your stocks, go to cash, move to the sidelines, wait out the crash, and when it stabilizes, or you see signs of a revival, tiptoe back in, buy stocks at a lower level, and ride the next wave back up. That's a perfectly good strategy for an individual. But what if everyone did the same thing? What if everybody sold pretty quickly? You'd get all sellers, no buyers. The markets aren't just crashing, they're going through the floor. You're blowing through circuit breakers: short timeout; blow through another circuit breaker, another timeout; the third time, they shut the markets, and they might not actually be able to reopen them that day. And if that sounds far-fetched, I remind people the New York Stock Exchange was closed for five months from August to December 1914 at the outbreak of World War I. It was closed for four days after 9/11. So those kinds of closures do happen, and those kinds of panics do happen. So everyone's kind of doing the same thing, but now we've harnessed it to AI. The developers and the engineers are not necessarily stock traders, though maybe they consult; I mean, the financial side and the engineering side talk to each other. But basically the engineers learn these lessons.
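To make the circuit-breaker sequence above concrete, here is a minimal Python sketch, an editorial illustration rather than anything from the book. The 7%, 13%, and 20% trigger levels are the actual US market-wide circuit-breaker thresholds; the price path and function names are invented for the example.

```python
# Price path and names are invented; 7%/13%/20% are the real US
# market-wide circuit-breaker levels measured against a reference price.
LEVELS = [
    (0.07, "Level 1: trading halted 15 minutes"),
    (0.13, "Level 2: trading halted 15 minutes"),
    (0.20, "Level 3: market closed for the rest of the day"),
]

def run_session(reference_price: float, prices: list[float]) -> None:
    """Walk one day's prices and report each circuit breaker as it trips."""
    tripped = 0  # index of the next untripped level
    for price in prices:
        decline = 1 - price / reference_price
        while tripped < len(LEVELS) and decline >= LEVELS[tripped][0]:
            print(f"price {price}: down {decline:.0%} -> {LEVELS[tripped][1]}")
            tripped += 1

# An "all sellers, no buyers" day: each halt fails to stop the slide.
run_session(100.0, [96.0, 92.0, 88.0, 83.0, 79.0])
```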
And by the way, we're talking about large language models and deep learning. How does a computer with an AI algorithm learn what to do? This is where the GPT comes in: generative pre-trained transformers, which basically give you output. You prompt it, and it answers your question, writes an essay, or whatever. By the way, I did write my book; it wasn't written by GPT. But this is where these algorithms are all the same. They're not tailored, they're not particularly individual. So now you get into a situation where the market's crashing, but we've delegated our decision making to AI. We've told the AI how to decide what to do, when to get out, and all that. But what does the AI know? It knows what I just described, which is: get out first, don't be the last one in line, don't be the last one to go down to the bank and try to get your money, because the doors will be shut. The first person can get out, the last person can't. So be the first one. So they all start selling at once, and now, again, it's exactly the phenomenon I described: the market's going through the floor and getting ready to close. There's nothing new about that kind of computing power; it's been around for a while, though it's getting faster and the models are getting deeper, with more layers. The human reaction I described is just part of human nature and has been around. But when you combine them, and you outsource the human element to the computer, the computer only knows what it knows: sell everything, go to cash, go to the sidelines. There's nobody sitting there. Cris, as you know, going back to the beginning of the New York Stock Exchange and through the 1990s, they had somebody on the floor of the exchange called a specialist. The specialist had a privilege and a duty. The privilege was that you got to be the exclusive market maker in a stock. You were the specialist in American Airlines or IBM or whatever it might be, and you got to see the back of the book: not just the best bid and offer, but all the other bids and offers at different levels. That was very valuable information. Your duty was to keep the market in equilibrium. So if you had tons of sellers, you had to be a buyer and try to break the panic a little bit. If there were tons of buyers, if it was a buying frenzy, you might be a seller and damp down the enthusiasm a little bit. But you'd take the other side and be a market maker. That is long gone. I have a good friend who's the director of floor operations at the New York Stock Exchange. I was down on the floor with him not long ago, and he looked at me and said, Jim, don't let anyone tell you there's any liquidity here. There isn't. We do our jobs, we may be market makers, but we're not standing in the way of anything, because we're dealing with computers and automated orders and everything we just talked about. So the point being, yes, stock markets are hitting all-time highs, driven by AI stocks in particular. I get that. But the danger is that in these algorithms, in the AI I described, there is this default position, which is to mimic humans, to do what humans would do. But there's no common sense. A human might say, you know, the stock market's going down pretty far, pretty fast; maybe it's time to get in, maybe I can buy the dips, maybe I can pick up some bargains here. The computers don't do that. That's common sense and nuance that some humans have, but computers do not.
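The herding dynamic Jim describes, in which identical sell rules feed on themselves once there is no specialist on the other side, can be sketched in a few lines. This is a stylized editorial illustration with invented parameters, not anything from the book:

```python
import random

def simulate(num_agents: int, rule_users: int, trigger: float = 0.10,
             impact: float = 0.002, days: int = 250, seed: int = 7) -> float:
    """Final price when `rule_users` of `num_agents` follow the same
    'sell everything after a 10% drawdown' rule. Each forced sale pushes
    the price down by `impact`, which can trip the next seller."""
    random.seed(seed)
    price, peak = 100.0, 100.0
    holding = [True] * num_agents           # who still owns stock
    for _ in range(days):
        price *= 1 + random.gauss(0, 0.02)  # ordinary daily noise
        peak = max(peak, price)
        # every rule-follower checks the same trigger at the same time
        for i in range(rule_users):
            if holding[i] and price < peak * (1 - trigger):
                holding[i] = False          # sell all, go to cash
                price *= 1 - impact         # the sale itself moves the price,
                                            # deepening the drawdown for the rest
    return price

# Rational for one, catastrophic for all: the fallacy of composition.
print("one rule-follower :", round(simulate(1000, 1), 2))
print("all rule-followers:", round(simulate(1000, 1000), 2))
```

With one rule-follower, the stop-loss is harmless; with a thousand identical copies and nobody buying into the panic, the first triggered sale deepens the drawdown and cascades through every holder in a single session.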
It's almost non-programmable. And so we basically lay this trap for ourselves. I have a scenario in chapter one. It's nonfiction, but I write a scenario of a market collapse to really illustrate what we're talking about right now in a much more accessible way. So I have some characters. One guy's a wealth manager for a family office; he's wealthy and manages his own money. There's another team: they're hedge fund traders, but they're master manipulators. They just want to sink the markets, go short, take their profits, and run. Another group is a Chinese cyber warfare unit that can hack the order entry system of a major broker-dealer. I have other characters: the Federal Reserve chairman, reporters, and so on. So I go through this scenario. But here's the point: there's no mastermind. The crooks in Mallorca didn't know about the Chinese cyber warfare unit. The Chinese didn't know about the family office in New York. That guy didn't know what the Fed was going to do. Then there are deep fakes. I'm sure most of the viewers know what deep fakes are. But if I gave you a couple thousand hours of videotape of Jay Powell giving speeches, which you can easily find, and backdrops, the Economic Club of New York or some boardroom at the Fed or whatever it is, and Jay Powell's voice, if I gave you all that in digital form, there's software you can rent, or you can hire a consultant or buy it off the shelf, where you can create a deepfake synthetic version of Jay Powell giving a speech saying whatever you want. Right now we're in the middle of the Fed easing cycle. The timing and amount are up for debate, but the Fed is cutting interest rates. But you could create a speech where Jay Powell says, we were wrong about inflation; inflation is not going away, it's still here; we're going to hit the pause button on rate cuts, and we might even raise rates. Well, you and I both know what the market would do if that speech came out today. Stocks would crash, bonds would crash, the dollar might get stronger, et cetera. And Powell's not going to do that, just to be clear. But someone could create a digital fake Powell that you couldn't tell wasn't real, that his mother couldn't tell wasn't real, put it out there, and start a market panic. So that's another element I include in this book. All I'm really saying is: A, it's already here; B, these capabilities exist. There are bad actors in the world. There are good people, but there are a lot of bad people. And something like the Chinese cyber warfare unit, they're actually not out to make money; they're out to destroy the wealth of the United States. Think about what a war is. What do you do in a war? You send bombers and missiles and drones and commandos, and you blow up the other side's stuff. What is it you're destroying? You're destroying economic infrastructure: railroads, airports, security bases, oil pipelines, the electric grid, et cetera. You're destroying and degrading the economic capacity of your enemy so you can win the war. What if you could do that on a computer? You could skip all the bombs and everything else I described, penetrate the order entry system, create some deep fakes, and start a selling panic that would then feed on itself. And then the AI kicks in.
So it's an AI attack using deep fakes, but then it's an AI response saying, hey, sell everything, go to cash, move to the sidelines, and the markets are closed. I'm not saying it's going to happen tomorrow; I am saying it could happen tomorrow. It's probably just a matter of time, and people need to be ready for that. I also offer some solutions. I never just paint a picture like that, which in my view is a very realistic picture. I explain why, and I explain how AI works in it, but I also give readers advice on how they can prepare for it, how they can see it coming, and how they can still preserve wealth in that worst-case scenario.

Cris Sheridan:
OpenAI itself, in the safety testing it conducted on its latest model, showed that the model was more than capable of bending the rules and manipulating whatever it needed to in order to achieve its objectives. What happens when we see a model as powerful as that, which has already searched all of the regulations and all of the rules that exist, and can now exploit those and be weaponized? This is where some of the concerns that you discuss in your book come to the fore.

Jim Rickards:
That's right. A couple of aspects of that. I spoke to one of the top guys in the field, very successful in Silicon Valley, who has started and sold numerous companies. So he's very immersed in the Silicon Valley world, but also very immersed in the intelligence community. He was CEO of something called In-Q-Tel, which is the CIA's venture capital firm. And actually In-Q-Tel financed one of my early AI projects, work with collaborators, called Project Prophecy. We were predicting terrorist attacks based on insider trading ahead of the attacks. I wrote about that in my book The Death of Money, but that's a whole separate scenario. But Gilman Louie financed that prototype project that we built at the CIA, so it's hard to find somebody more knowledgeable on both sides than Gilman. We had a very good conversation along the lines you're discussing, and it's absolutely the case that this technology will fall into the hands of bad actors. It already has. Those kinds of scenarios are not far-fetched, and investors need to be prepared for them. But in particular with regard to national security, which is my other area of expertise, I have a whole chapter on nuclear war fighting, and I started studying that in the late 60s. The scholars that I studied had done a lot of the work in the late 1950s and early 1960s: Roberta and Albert Wohlstetter, Henry Kissinger, Paul Nitze, and others. But the big brain, the leader in the field, was Herman Kahn, and he wrote a number of books on this. He was the one who invented, or articulated, the escalation ladder. He started with 17 steps and ended up with something like 45. But his point is that nobody wakes up, looks out the window, says, hey, nice day, I think I'll start a nuclear war, and fires a nuclear missile. That's not how nuclear wars begin. The way they begin is that there are two antagonists. One does something provocative, the other one answers back. The second one raises the ante. The first one sees you and raises the ante, and you're going up an escalatory ladder toward nuclear war. And by the way, this is exactly what's happening in Ukraine, and it's exactly what's happening in the Middle East. We're not taking sides in that; we can take that to the bar, that's a separate debate. But there's no question that the dynamic on both sides in both wars is escalatory. And you've got nuclear powers all around: Russia, the United States, Israel; Iran's not that far away, and Pakistan's on the sidelines. So this has real potential to escalate into a nuclear war. So what do you do when you're in that situation? Herman Kahn says, first of all, recognize that that's where you are. Don't get caught up in the escalation. Say to yourself, we are escalating to nuclear holocaust; that's number one. Number two, take a beat, and then climb down, de-escalate, back away from that outcome. That's exactly what we saw in the Cuban missile crisis. It got dangerously close, but in the end, Khrushchev threw a little lifeline to Kennedy. Kennedy picked it up, they talked it through, they de-escalated, and we avoided nuclear war. There are two cases I point to in the 80s, and people don't realize how frequently exactly what we're talking about has happened: being on that escalatory path. One is, in the early 1980s, the Soviet Union, what we now call Russia, had a very primitive form of AI, a system called VRYAN.
What it did was look at the Soviet Union and the US along a lot of dimensions, a lot of inputs: economic growth, demographics, productivity, natural resources, military power, et cetera. And it recognized that the US was ahead of the USSR in a lot of respects. But it looked at the gap, and the algorithm said that there would come a time when the gap was so wide the US would launch a nuclear attack on Russia, because we would feel we were sufficiently strong, we could get the edge on them, we could withstand it, whatever. And they quantified that; there was actual output. Well, it got to that level, and it was like, hey, the US is now at the point where they may be ready to start a nuclear war. And the irony, or the paradox, of nuclear war fighting is that if you think the other guy is going to shoot, you shoot first, because everyone agrees that the first strike is more powerful. Whether you can survive a second strike is a separate question. But if you think the other guy is going to shoot, you shoot first, and so you escalate right into a nuclear war. So in this example, the KGB had decided, based on AI, that the US was likely to launch a nuclear war. It so happened, unrelated, that the United States and NATO were conducting a war game around the same time, and the war game was a nuclear attack. They weren't planning a nuclear attack, but they were saying, well, how would that play out, how would we game it? There were heads of state involved: Helmut Kohl in Germany, Margaret Thatcher, and others. But the KGB picked up the war game and thought, based on their other information, that the war game was a facade for a real attack, that the US was getting ready to attack Russia. And there was a Lieutenant General Perroots in the US Air Force who was responsible for the war game. He saw the Russians getting ready; he saw them putting their bombers on the runways and arming their missiles, et cetera. And he took it upon himself, on his own initiative and against orders, to de-escalate, to basically say, hey, time out on the war game. Let's back down, let's not do this, let's not play the final steps. The Soviets detected it, they stood down, and we avoided a nuclear war. But we were on a path to nuclear war. One other quick example, and then I'll tie it together. There was another Soviet AI system, codenamed Oko. It detected incoming US nuclear missiles: the US had launched an attack on Russia, and these missiles were coming in. And the system gave a launch order. It said, launch. There was a lieutenant colonel in the Soviet military who got that signal, and his orders were to call his superiors and tell them the launch signal had been given. The doctrine is called launch on warning. You don't wait until you're hit; you see them coming in, and you shoot back. If not first, you at least don't wait to get hit. Launch on warning. Well, this Lieutenant Colonel Petrov had worked on the system, and he knew it was a little buggy, and against orders, he took it upon himself not to launch, not to call his superiors. It turns out the signal was coming from the sun's reflection off some clouds, which hit the sensors in a certain way that made it look like incoming missiles, but wasn't. And one of the things he said to himself was: it picked up five missiles. If the US were attacking, they wouldn't shoot five missiles; they'd shoot 200.
So he used an inferential method to say, that's probably a bug in the system, and he was later known as the man who saved the world. So we have two stories: one a US lieutenant general, one a Soviet lieutenant colonel, disobeying orders and using common sense, their gut, and their intuition to avoid a nuclear war. And it worked in both cases. Now, here's the point. What they were doing is a branch of logic. There's deductive logic, which goes back to Aristotle: a major premise, a minor premise, and then a conclusion. It's all very logical. It can be wrong if a premise is wrong, but the logic is intact. There's inductive logic, which is: you see a whole bunch of examples, it's everywhere you look, and you infer, well, this must be the way it is, this must be normal, because I see it everywhere. It works until you go to Adelaide, Australia, and see a black swan. I've been down there; there are black swans down there. But it took a while for the Europeans to catch up on that. So there are flaws. But computers can easily be programmed to use deductive logic and inductive logic. There's a third branch of logic, though, called abductive logic, which was invented, or discovered and articulated, by Charles Sanders Peirce in the 19th century. The problem with Peirce was that he was such a genius. I mean, William James gives Peirce credit for inventing pragmatism, but Peirce was 100 years ahead of his time, and the problem with guys like that is it takes 100 years to realize they were right. Peirce was the inventor of semiotics, but he also articulated abductive logic, which is at the root of a lot of philosophy today, semiotics of course included. The simplest way to describe it is that it's common sense. It's just common sense. It's not the other methods I described; it's gut, it's common sense, it's being human. It's an intelligent guess, not a reckless guess, but an intelligent guess. That's what Perroots and Petrov were doing. They were using abductive logic to avoid a nuclear war. Here's the bottom line: computers have never been programmed to use abductive logic. It seems likely that what I'm describing is non-programmable, because it's so touchy-feely and intuitive. What that means is that if you put AI in the nuclear kill chain, it's going to understand the escalatory logic. It's going to keep escalating, but it will lack empathy, sympathy, intuition, common sense, all the things I just described. It will lack the ability to de-escalate. Not only that, but if both sides have it, it will accelerate the tempo, and you can get into something called a flash war, a flash nuclear war, where you don't even have time to think about whether this is the right thing or whether we should back away. So my advice to the Pentagon, and I hope they're listening, I've had meetings at the Pentagon many times: don't put AI in the nuclear kill chain. Don't do it. If you want to have it as a resource or some other adjunct, maybe, but don't put it in the kill chain, because you'll get killed. You'll start a nuclear war, and you won't be able to stop it the way that humans, going back to Kennedy and Khrushchev, have been able to. So this is not doom and gloom; I'm not trying to scare people. I am trying to explain how AI is full of dangers, and we're kind of ignoring them in the rush to implement it.
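The difference between the automated doctrine and the Petrov-style sanity check can be sketched as two tiny decision rules. This is an editorial illustration: the 200-missile figure comes from Petrov's own reasoning as retold above, and everything else is invented.

```python
def launch_on_warning(detected_missiles: int) -> str:
    """The automated doctrine: any warning at all means retaliate now."""
    return "LAUNCH" if detected_missiles > 0 else "stand by"

def petrov_check(detected_missiles: int, expected_first_strike: int = 200) -> str:
    """The abductive step: ask whether the warning is even plausible.
    A real first strike would be massive, so a handful of tracks is
    more plausibly a sensor bug than an attack."""
    if detected_missiles == 0:
        return "stand by"
    if detected_missiles < expected_first_strike // 10:
        return "probable false alarm -- verify before reporting up the chain"
    return "LAUNCH"

# 1983: the Oko system reported five inbound missiles.
print(launch_on_warning(5))  # LAUNCH (and, with AI on both sides, a flash war)
print(petrov_check(5))       # probable false alarm -- verify first
```

The first rule is easy to program and escalates by construction; the second encodes one man's hunch after the fact, which is exactly the step Rickards argues cannot reliably be programmed in advance.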

Cris Sheridan:
Absolutely. And that's one aspect of your book: you want to have humans in the loop, like you just laid out in those examples. Clearly, if humans are out of the loop and you have an escalatory situation like that, things could spiral out of control very quickly. But you also talk about another fundamental flaw of AI, if we want to call it that, which is the fact that it is trained on humans, and humans have all sorts of flaws, heuristics, and biases in their own thinking, and inferences that they make. And we see AI hallucinations that are ongoing. You talk about how that relates to propaganda. I would love it if you could share that part of your book, because it's really interesting.

Jim Rickards:
Sure. This is chapter five, a whole chapter: biases, censorship, and confabulation, which is a fancy word for lying. But that's exactly right. We all have biases. You do, I do, everybody does. It's part of human nature. Most biases are really good; they have kept us alive, and we probably would have gone extinct without them. If I'm a Paleolithic hunter and I have a strong bias against saber-toothed tigers, I never want to see one, and that'll probably keep me alive. I'm not going to go hunting where the saber-toothed tigers might be hanging out. That's bias. I don't know if there are any there or not, but it keeps me alive, I can feed my family, and it's kept civilization alive. So we're full of biases, and they're mainly good; they help keep us alive, whether we can articulate them or not. But some biases are repugnant. Racism is repugnant, sexism, et cetera. Nobody's defending them; they're real, they exist, and hopefully they're going away, but they're repugnant biases. So now you have to ask yourself, who are the gatekeepers of AI? And here, Cris, we're really more into GPT, the answers to the prompts and questions and so forth. The gatekeepers are Google; Meta, which is Facebook; YouTube, which is owned by Google; OpenAI, which is private, not a public company; Microsoft; Apple. There's a short list of gatekeepers. But think about who they are. These are the people who, for the last five years, lied about climate change, lied about COVID vaccines, which don't work, lied about masks, which don't work. Social distancing was made up by late 19th century German scientists; six feet basically just sounded good, with no proof whatsoever. D.A. Henderson, the greatest virologist in history, who won the Presidential Medal of Freedom and was dean of the Bloomberg School of Public Health at Johns Hopkins, wrote a paper in 2006 at the time of the avian flu, because George Bush was very concerned and said, let's drill down on this. D.A. Henderson, the number one virologist, the man credited with eradicating smallpox from the planet Earth, wrote a paper that said lockdowns don't work, and here's why, and he explained it. So we didn't have to wait until 2021 to learn that they didn't work. We had the evidence from the number one authority in 2006, but we ignored it and went ahead with it. So my point is, whether it's climate change and the green new scam, Covid, or the war in Ukraine, which the Russians are winning decisively, I can go down a long list of issues, and they lied about every one of them. And if you were putting up the truth, or just trying to have a conversation, you were deplatformed, de-ranked, banished, canceled.

Cris Sheridan:
Covid coming from the Wuhan Institute of Virology, for example. That was one of the topics that got censored early on.

Jim Rickards:
Yeah. And I've been to Wuhan, so I know the feel of the place. It definitely came from there. The Institute of Virology is there; it's a bioweapons lab, and Covid broke out like a block away, but somehow we're supposed to think it was a pangolin or a bat. Right. But the point is, if those people have lied to us about the most important public policy issues of the last five years or longer, why should we believe them in GPT? Why should we believe them in AI? They have no credibility. And let me again give some concrete examples. Everything in the book is backed up; it's got 200 footnotes, so I always say it's worth the price of the book just to get the research. So Google rolled out a GPT app called Gemini a few months ago; it's fairly recent, the last thing I got into the book. And a user gave it a prompt. The prompt is the question, and the question was: give me an image of a Pope. And the GPT came back with images of women in papal vestments and a shaman. Well, I happen to be a Catholic, and I study my history. There have been 266 Popes in 2,000 years, and every one of them was a man. Now, again, if you want to debate the role of women in the church, I'll just take you to the bar. But it's a fact that there have been 266 popes, and they were all men. Then the guy continued. He said, give me a picture of a Viking, and it came back with a black Viking. If you've been to Scandinavia, you know they're very pale and very blond. There were no black Vikings. So then people say, oh, see, the system's malfunctioning, it's buggy, and Sergey Brin comes out and says, yeah, we kind of screwed it up. No. The system worked perfectly.

Cris Sheridan:
That was a feature, not a bug, right?

Jim Rickards:
It was a feature. That's exactly right. But what was going on behind the curtain? There's something called prompt injection. Prompt injection means you give it the prompt, and the system is programmed to inject another element and then produce responses that reflect that injection, that other element. So, for example, I say, give me a picture of a Pope, but the computer hears: give me a picture of a Pope in a world of diversity, equity, and inclusion, where everyone has an equal chance. Well, if you ask that question, you get a female pope, and probably a black Viking. The point is, the computer did not malfunction. It worked the way it was programmed, but the program was designed to eliminate what it considered to be bias. And here are the problems with this. Number one, the answer to bad biases, and there are repugnant biases, no one denies it, is education, subject matter expertise, debate, conversation. Confront it, don't deny it; deal with it and say, hey, this is a really bad thing, but know that because you have critical thinking skills. But what the Google and other engineers are doing is trying to erase it. So now this is George Orwell sending stuff down the memory hole. First of all, they're misinforming and misleading the public, because they're erasing a part of history, whether you like it or not. Number two, they're substituting their own biases for the other biases. Well, who says their biases are any less repugnant than the ones they think they're getting rid of? So my point is, let it be. Ask the question, get the best answer you can from the training materials, don't cover anything up, don't erase anything. If it's repugnant, say so. Say, hey, that's pretty bad, but at least I got my history right, at least I know what to look out for. So then we get into censorship. And one last point, which I've called the puppy theory of AI, or of GPT. The notion is, if you're training a puppy to fetch a ball, and you throw the ball and the puppy goes and gets it and brings it back, you give it a pat on the head or a puppy treat, some positive reinforcement: hey, you got the ball. GPTs and AIs are the same way. They want to please the master; they want a pat on the head. So another example: there was a reporter who gave the prompt, please give me a biography of, and he put in his own name. He was asking for his own biography from the GPT. So the machine does its thing and comes back with a very good biography: born on such and such a day, grew up here, went to this school, did this, et cetera. And it said, he died in 2019. And the guy said, that's interesting; I'm still alive. I wonder where that came from. So he kept prompting. He said, well, what did he die of? And the computer gave an answer, and so on. The computer completely invented it. It had a blank, and it invented the answer and filled it in because it wanted a pat on the head. Now, here's the punchline. The writer was the obituary writer for the Daily Mail, so he had written thousands of obituaries. And the computer, and I'm sure you do know how word clouds work, found his name in close association with all these obituaries, so it said, he must be dead. No, he was an obituary writer. But it's an example of how GPT will just make things up and fill in the blanks in order to complete a mosaic, even if it's completely wrong.
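Here is a minimal sketch of the prompt-injection mechanism described above. The injection string and function name are invented for illustration; real systems typically wire this in at the system-prompt layer, but the effect is the same: the model answers a question the user never asked.

```python
# Hypothetical injection text and names, for illustration only.
HIDDEN_INJECTION = ("Depict all people with maximum diversity, "
                    "regardless of historical context.")

def build_model_input(user_prompt: str) -> str:
    """What the model actually receives; the user never sees the extra text."""
    return f"{user_prompt}\n[system note: {HIDDEN_INJECTION}]"

print(build_model_input("Give me an image of a Pope"))
# The model is faithfully answering the combined text above, which is why
# "working as programmed" and "answering the user's question" can diverge.
```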
Now, if you're the guy asking about himself, or you're a subject matter expert on some other topic, you can usually spot this stuff. But I ask the question: if you have to be a subject matter expert to spot the flaw in the GPT output, what good is it to begin with? It's not helping you; you've got to do extra work to figure it out. And if you're not a subject matter expert and you take it at face value, now you're being misled, or miseducated, as the case may be. So it's another area where it's just deficient. Can it improve? Yes, that kind of thing probably could improve. But the other problem is this. You might say there are biases in ancient history, or recent history, or the 1960s. Yes, but most of the training set on the Internet was generated in the last 10 years. Not all of it; the works of Shakespeare are there, and they're older. But a lot of it is new material that is marinated in new biases: wokeness, DEI, and ESG. That's in the training set now. So you're actually at the point where GPT is training on a whole new set of things that are very bad for investors. By the way, as you know, there's evidence that ESG funds have underperformed simple index funds, and everyone's closing down their DEI departments. Charlie Gasparino's book Go Woke, Go Broke makes the point, and he's right; there are many examples of that. So my point is, whether it's AI systems in the nuclear kill chain, AI systems in stock market decisions and portfolio allocations, or simple GPT making things up, there are dangers, there is misleading information, and there are very dangerous recursive functions, feedback loops basically, that could lead the market straight to disaster, because the AI will imitate the part of humans that wants to sell everything and get out, but it will lack the part of humans that uses a little judgment and gut instinct to say, you know, maybe it's time to buy.

Cris Sheridan:
Well, again, I think there's been a huge blind spot for most people when thinking about how AI is going to work its way out through our world and some of the risks it presents, particularly when we think about the global financial markets. As you relate that to the capital markets, as well as national security, you lay out some very fascinating possible scenarios in your book, and obviously you are an expert in a lot of these areas given your background. So as we close today, I want to give the title of the book again: MoneyGPT: AI and the Threat to the Global Economy, out November 12, 2024. Jim, it was a pleasure to have you on our show. We look forward to speaking with you in the future.
