I do know something about AI, having spent about 10 years in the IT field and played around with developing basic neural networks for fun.
This article is spot on. AGI is likely a pipe dream. I believe that for both technical and philosophical reasons. For postliberals who actually want to try and improve the world, the AI-related low-hanging fruit is erotic chatbots and deepfake video.
Erotic AI chatbots can already produce a Choose-Your-Own-Adventure style narrative, but the story never ends and can be adapted instantly to any fantasy you have. It is INCREDIBLY addictive. I understand if that sounds like a minor problem, but with a birthrate already at 1.6, what percentage of men will choose this Matrix-like world, particularly as it "improves"? Right now it's text with a few AI-generated pictures; we're maybe 3 years away from being able to do the same with real-time, interactive video. You think today's porn and video games are bad? You have no idea what's coming. If you believe in virtue at all, this requires your attention.
The same AI video technology can already produce CCTV-level deepfakes that are indistinguishable from the real thing. Within that same 3-year timeframe, it will be able to produce broadcast-quality deepfakes as well. A picture will no longer be worth 1,000 words; as evidence of truth, it will be worth nothing: in newspapers, on the web, or in court. We're going back to the 1800s' information ecosystem, where the only way you knew whether something actually happened was the past veracity of the person who told you about it (be it a friend or a reporter).
There is nothing we can do about AGI. If it's possible, someone will do it, and we'll all have to deal. But there is a lot that could be done on these two fronts to improve the world today, right now, and the AGI debate is a distraction from doing it.
Fantastic write-up; it captures precisely how I have felt about the technology, in a much more elegant way than I could have written it.
"A community determined to demonstrate how great its models are should be much more eager to create a much broader set of benchmarks simulating real-world challenges and scenarios, and demonstrating to the world how the models perform."
For instance, could an AGI ever come up with the idea of factories in the countryside that run on part-time jobs as a potential solution to a myriad of social problems: retirement, technological unemployment, housing, childcare, marriage stability, the slowdown in total factor productivity, etc.?
My guess is only after reading about it in a book written by an actual human being: https://www.amazon.com/dp/B00U0C9HKW
As to the more general question, never bet against the law of diminishing returns.
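A minimal sketch of what such a "real-world scenario" benchmark could look like, for concreteness; the Scenario type, run_benchmark harness, and the bakery case below are all hypothetical illustrations, not anything from the article:

```python
# Hypothetical sketch of a scenario-based benchmark harness: each case
# pairs a realistic task prompt with a domain-specific success check,
# and a model is scored on the fraction of cases it handles.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    prompt: str                    # a realistic task, not a toy puzzle
    passes: Callable[[str], bool]  # did the model's output succeed?

def run_benchmark(model: Callable[[str], str], scenarios: list[Scenario]) -> float:
    """Score a model callable as the fraction of scenarios it passes."""
    return sum(s.passes(model(s.prompt)) for s in scenarios) / len(scenarios)

# Example (crude) check: a usable answer to an everyday business task.
cases = [
    Scenario(
        prompt="Draft a one-paragraph refund policy for a small bakery.",
        passes=lambda out: "refund" in out.lower(),
    ),
]
```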
Quite a good piece, and I do/did have a technology background (I am retired). Regarding AGI, I am a little surprised that no one mentions the Turing test, developed by Alan Turing, as a meaningful benchmark for assessing AGI. It is straightforwardly simple: one communicates with an entity that is not seen or heard (it is behind a door, so to speak). The tester queries the entity, carries on a conversation, and tries to determine whether it is another human or an AI. When one can't tell the difference, the AI has passed the test. In my own experience with ChatGPT, there are always questions that easily trip it up. As the piece points out, the expertise window is rather narrow. An additional issue is power consumption. The requirements are enormous: it takes a certain amount of mass to produce the unit of energy needed to compute a byte. David Foster Wallace cited an individual who calculated that if the entire mass of the planet were devoted to computational power, it would top out at 2.54 x 10^192 concurrent one-byte computations, if I am recollecting correctly. In other words, the number of "smart AI brains" that can be operated, and at what cost, are highly relevant and bounded questions.
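For reference, the standard physics behind "a certain amount of energy to compute" is Landauer's principle; a minimal back-of-the-envelope version, using textbook values rather than whatever calculation Wallace's source used, is:

```latex
% Landauer's principle: minimum energy to erase one bit at temperature T.
E_{\mathrm{bit}} = k_B T \ln 2
  \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693
  \approx 2.9 \times 10^{-21}\,\mathrm{J}

% So one byte (8 bits) costs at least:
E_{\mathrm{byte}} = 8 \, k_B T \ln 2 \approx 2.3 \times 10^{-20}\,\mathrm{J}
```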
Solid article about the real potential (good and bad) of AI. FYI a thoughtful book about AI for the layman is "Taming Silicon Valley" by Gary Marcus.
You should check out this video. A good supplement to the arguments you make here.
https://youtu.be/EOS3JkKmjm8?si=CobArznd8XsvFP2l
Here’s a concern I have. What if you applied those three tests to the development of the atomic bomb? I dare say they would not have helped us to see what was coming.
Here are three tests I would suggest we consider (with the splitting of the atom in the back of my mind):
What is the worst-case scenario if the claims turn out to be true?
Why are some of the brightest minds in the field issuing grave warnings about its potentially devastating effects?
How much capital and energy is being applied to the development of this new technology?
Graphically, the splitting-of-the-atom scenario would likely be represented by some kind of S curve as well, but it would be a dangerous S curve indeed. Although the basic principles still apply, nuclear weapons are now as much as 1,000 times more powerful than the bombs dropped on Hiroshima and Nagasaki. As regards the development of nuclear weapons, TTID certainly applied. Could an AI convince a rogue world leader that some new delivery system would allow them to win a nuclear war by launching a first strike? I’d be afraid even to ask it that question.
Apocalyptic scenarios aside, artificial intelligence may already be quietly wreaking havoc as the technology evolves, making it difficult (and eventually impossible) to distinguish what is real from what is not. As a result, trust in our systems and institutions is crumbling, and as trust erodes, civilization is at risk of a complete breakdown.
Perhaps not that interesting, but I fear that AI may be much more than a parlour trick, and it seems to me potentially dangerous to view it that way.
Without a doubt, the most cogent and sobering description of AI that I have read thus far. I remember my first dot.com clients in 1997 (I was a database marketing consultant), who told me they were going to put all the "bricks and mortar" retailers out of business.
Didn't quite happen that way. Don't get me wrong... dot.commerce is a fantastic thing (who doesn't order from Amazon?), but it complements physical stores rather than replacing them.
Want to go on a deep dive on this subject? See Frank Wright’s post today: "Elon Musk - digital Bonaparte."
To answer the title of this post: Yes.
Artificial sweeteners, artificial flowers, artificial butter, artificial diamonds, artificial limbs, and artificial intelligence… each is a Pandora’s Box. Hope is what is being sold, with little or no mention of the problems that might arise from the ramifications of these potentially life-altering contrivances.
“Tiger got to hunt, bird got to fly
Man got to sit and wonder why, why, why.
Tiger got to sleep, bird got to land.
Man got to tell himself he understand.”
…..and so it goes
Would we not be better served cultivating our innate intelligence without artificial interference?
Far from having any contrary reaction, I liked the sobriety of your analysis. Yes, AI will produce a lot, but it will be a lot along the parallel lines you have drawn. The P/E ratios may drop from 50 to 35, but that would not be a bad thing.
In general, I'm with you that the LLMs thus far are more of a parlor trick than transformative technology, but my fears are twofold:
1) People are spending biblical amounts of money to develop them. That's certainly not proof that the technology will be worth the investment, but it's at least evidence that many people with the credibility to generate multi-billion-dollar investments are choosing this approach.
2) These same people seem to throw out casual statements like "there's a 20-25% chance these technologies will take over and enslave humanity," and I don't have much trust in our leadership class to take a thoughtful approach to minimizing this risk.
Not sure there's anything we can do as ordinary people, so I don't let it bother me too much, but it's kind of crazy how little debate there's been about whether we should go down this path.
In the ’90s there was a lot of premature hype about the internet. It happened in a big way, but it took longer than people and stock prices thought it would. I expect the same is true with AI. We are currently in the hype cycle.
AI probably won't destroy humanity, but something will, even if we have to wait for the universe to implode in a reverse Big Bang. The Sun will probably go supernova first, or genocidal space aliens will arrive.
In the meantime, AI is wonderful for compiling enemy lists or collapsing the electric grid. I can imagine some positive uses, like medical diagnosis or engineering design, where it will make the process faster, cheaper, and more accurate. For now, the biggest impact on the private sector seems to be in HR, which is interesting, since there is so much natural stupidity there.