Subscriber Resources:

Listen to the Interview MP3 audio file

The Solari Report 2018-08-09

Read the Transcript

Read the transcript of The Artilect War: Will AI be the Death of Us?, with Dr. Hugo de Garis (PDF)

Download MP3 audio file

This week’s Money & Markets segment can be found here.

Related Reading:

Battlestar Galactica

Deep learning on Wikipedia

Artificial intelligence on Wikipedia

Fermi paradox on Wikipedia

The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines by Hugo de Garis

The Boundaries of Natural Science by Rudolf Steiner

The Real Reason to be Afraid of Artificial Intelligence – Peter Haas

Artificial Intelligence: it will kill us – Jay Tuck

THE ARTILECT WAR: Bitter Conflict over Whether Man or Machine Should Rule, Will Lead to a Gigadeath “Artilect War”

“The estimated bit processing rate of the human brain is approximately 10^16 bit flips per second…. a hand held artilect could flip at 10^40 bits per second. An asteroid sized artilect could flip at 10^52 bits a second. Thus the raw bit processing rate of the artilect could be a trillion trillion trillion (10^36) times greater than the human brain. If the artilect can be made intelligent, using neuroscience principles, it could be made to be truly godlike, massively intelligent and immortal” ~Dr. Hugo de Garis
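The arithmetic in the quote can be checked directly (the rates themselves are Dr. de Garis's estimates, not measured values; the variable names below are ours, for illustration only):

```python
# The three processing rates quoted above, in bit flips per second.
HUMAN_BRAIN = 10**16   # estimated rate of the human brain
HANDHELD    = 10**40   # hypothetical hand-held artilect
ASTEROID    = 10**52   # hypothetical asteroid-sized artilect

# Ratio of the asteroid-sized artilect to the human brain:
ratio = ASTEROID // HUMAN_BRAIN
print(ratio == 10**36)          # True: the quote's 10^36
print(10**36 == (10**12)**3)    # True: "a trillion trillion trillion"
```

So the "trillion trillion trillion" figure refers to the asteroid-sized artilect; the hand-held version would be a more modest 10^24 times faster than the brain.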

By Catherine Austin Fitts

This week on The Solari Report, Harry Blazer speaks to Dr. Hugo de Garis about his book, The Artilect War: Cosmists vs. Terrans – A Bitter Controversy Concerning Whether Humanity Should Build Massively Intelligent Godlike Machines, and about the dangers of artificial intelligence or AI.

Dr. de Garis is Australian. He studied theoretical physics and then began developing artificial intelligence. According to Wikipedia, in 1992 he received his PhD from the Université Libre de Bruxelles, Belgium. He worked as a researcher at ATR (Advanced Telecommunications Research Institute International, 国際電気通信基礎技術研究所), Japan from 1994–2000, a researcher at Starlab, Brussels from 2000–2001, and an associate professor of computer science at Utah State University from 2001–2006. Until his retirement in late 2010 he was a professor at Xiamen University [Fujian, China], where he taught theoretical physics and computer science and ran the Artificial Brain Lab. He is now in the process of moving back to his native Australia.

Dr. de Garis has serious reservations about the risks associated with artificial intelligence. For the first part of this Solari Report, he and Harry discuss the nature of artificial intelligence and the associated risks. In the process, Harry becomes intrigued with integrating the discussion with other developments in global governance and geopolitics, and outlines some of his thoughts on the application of AI to further centralize political and economic control. I have asked our team to make a special video of that 20-minute-plus section for posting on Thursday, in addition to our regular audio. We will also post Harry’s recommended subscriber links.

In Money & Markets this week I will discuss the latest in financial and geopolitical news from Chartres, France. You can post questions, comments and suggested stories here.

In Let’s Go to the Movies, I will review one of my all-time favorites, the TV series Battlestar Galactica. The human race faces extinction after the cybernetic Cylons attack the Twelve Colonies. A battlestar and a small fleet of surviving humans escape and set out to find the planet Earth while continuing to fight off their Cylon attackers.

Talk to you Thursday!

76 Comments

  1. Had the opportunity to hear Dr. Hugo de Garis give a presentation in Branson on the coming Artilect War. In person he is a remarkable man: animated, humorous, and passionate about the very serious topic of the approaching Artilect War. Dr. Hugo reminds me of Professor Irwin Corey (remember him?) in his delivery, with a restrained manner. The 3,000 people in the audience got his message. The world is about to change in a big way unless the powers decide to close the Pandora’s box that has been pried open by technology. Can they? They were able to come up with the Limited Test Ban Treaty, pledging to refrain from testing nuclear weapons in the atmosphere or in outer space, when the Soviet Union almost set the atmosphere on fire. Maybe an agreement can be made about curtailing the development of artificial intelligence? I would not put my hopes on seeing that happen, despite Dr. Hugo’s belief that we are in a war for the survival of our species.

    Just thinking out loud.
    Could it be that all the push for gun control over the last 20-plus years has to do with disarming the humans (Terrans) so as not to thwart the advancement of A.I.?

    Catherine, please keep in touch with this marvelous man.

    1. Will be posting a review of AI Superpowers shortly. Will strongly recommend it on the topic of AI. The Chinese are going to drive forward on AI as fast as they can go. No stopping it is what it looks like to me.

      The gun control battle is very important. However, the people fighting to protect guns have failed to grapple with mind control and invasive digital systems. This is one of the reasons we published Control 101: to get the full spectrum of citizens to look at an integrated piece.

  2. Late to this party, but I wanted to drop a comment on the interview style – I thought the questions that Harry posed from the beginning through the middle were excellent. However, after about the midway point, Harry began to lecture Dr. de Garis. I was disappointed in this approach. Even though I disagree with several of Dr. de Garis’s premises and conclusions, I wanted to know more about his thoughts on the upcoming AI tsunami.

    The thought – which I believe is attributed to Aristotle: intelligence is the ability to hold two paradoxical ideas at the same time without blowing up – came to mind several times during the latter part of this interview. There is no need, imo of course, to try to convert the guests to your point of view. Enable your guests to flesh out their ideas completely.

    Groupthink is a danger for all viewpoints and groups, and the Solari group is not an exception. I think we are best served by avoiding the need to state positions that everyone here (likely) agrees with.

    1. Good point about the dangers of groupthink…!

      Now even later to this party – regarding “the ability to maintain two paradoxical ideas at the same time” (under *any* conditions), I perceive that Harry Blazer knew *exactly* what he was doing when he asked, “Is it possible for AI to be irrational or illogical?”

      And with the response, “Oh God! Now you are asking me Gödel type questions. That’s a really tough question. I think I’ll pass,” I fear that Dr. Hugo de Garis summarily dismissed himself from any serious consideration as a sincere/principled “brain-builder” or “Brain Architect” – at least in the view of any genuinely qualified mathematician I can think of. (For those unfamiliar with Gödel’s theorems, they concern the limits of what formal systems can prove about their own consistency.)

      I mean, by now (May 2023) even casual AI users/consumers know about the irrational/illogical/paradoxical phenomenon of “AI hallucinations,” which even Dr. de Garis would understand perfectly with only casual reflection upon Gödel’s proofs, which he surely remembers from his own mathematics study and teaching.

      I sense Harry’s determined (like Kurt Gödel’s) pursuit of the Truth, and Dr. de Garis’s apparent discomfort with it – i.e., the truth about the future reality we are choosing/creating/building for our posterity…

      However – happily for Dr. de Garis, and very much to his credit – Harry’s unrelenting, masterful “lecture” eventually elicited Dr. de Garis’s admission of his own pursuit of ultimate Truth: “I’ve learned so much math and physics, now I’m deeply, deeply suspicious that the laws of physics […themselves…] have been engineered, because they’re so mathematical.” And finally that “I probably enjoyed […this conversation…] a lot more because, remember, I live in a cultural cocoon.”

      Yes, AI thoroughly fascinates us, but let it neither distract nor deter us from pursuing the Truth…!
