Musk: SpaceX and Starlink don’t use artificial intelligence

During an interview at a recent conference Elon Musk admitted that both SpaceX and Starlink have found artificial intelligence (AI) lacking, and don’t use it at all.
The irony was that prior to this admission, Musk had been extolling AI’s potential, predicting it would someday do wonderful things.
Musk, who answered questions during the 27th annual Milken Institute Global Conference on Monday, spent a sizable portion of his talk extolling the benefits of artificial intelligence. At one point, when asked about the role the technology would play in humans’ everyday lives, he said a “truth-seeking” AI could “foster human civilization.”
But when asked whether AI could “accelerate” his efforts in space exploration, he seemed less excited about the technology. “I mean, oddly enough, one of the areas where there’s almost no AI used is space exploration,” Musk replied. “So SpaceX uses basically no AI, Starlink does not use AI. I’m not against using it. We haven’t seen a use for it.”
Musk continued, saying that he’s been testing improved AI language models by asking them questions about space — and the results have been disappointing. “With any given variant of or improvements in AI, I mean, I’ll ask it questions about the Fermi paradox, about rocket engine design, about electrochemistry — and so far, the AI has been terrible at all those questions,” Musk said.
Here we see the visionary meet the practical engineer/businessman. Musk always looks to the future with grand visions, but when it comes time to build those visions, he never allows his vision to interfere with practicality. AI is still essentially garbage-in-garbage-out. The rush by businesses and tech firms to use it blindly has resulted in more than a few disasters.
Musk doesn’t do anything blindly. He tested AI first, found it wanting, and thus put it aside, despite believing it will someday do wonderful things.
If only more companies used this approach. If they had, they might not have blindly pushed DEI and ESG requirements that have done nothing but harm to their companies, their workforces, and their bottom lines.
On Christmas Eve 1968 three Americans became the first humans to visit another world. What they did to celebrate was unexpected and profound, and will be remembered throughout all human history. Genesis: the Story of Apollo 8, Robert Zimmerman's classic history of humanity's first journey to another world, tells that story, and it is now available as both an ebook and an audiobook, both with a foreword by Valerie Anders and a new introduction by Robert Zimmerman.
The print edition can be purchased at Amazon or from any other book seller. If you want an autographed copy the price is $60 for the hardback and $45 for the paperback, plus $8 shipping for each. Go here for purchasing details. The ebook is available everywhere for $5.99 (before discount) at amazon, or direct from my ebook publisher, ebookit. If you buy it from ebookit you don't support the big tech companies and the author gets a bigger cut much sooner.
The audiobook is also available at all these vendors, and is also free with a 30-day trial membership to Audible.
"Not simply about one mission, [Genesis] is also the history of America's quest for the moon... Zimmerman has done a masterful job of tying disparate events together into a solid account of one of America's greatest human triumphs."--San Antonio Express-News
It’s sad how many great American companies have become dysfunctional to the point of irrelevancy, or have even become dangerous.
I’m betting if a SpaceX manager wrote a paper for a typical MBA class of how SpaceX did things, the professor would fail them.
Let’s be blunt about this.
AI is still too stupid to be anything other than a fancy search engine and/or a stupid print editor.
Re: pzatcho
“AI is still too stupid to be anything other than a fancy search engine and/or a stupid print editor.”
Very well said! I have been trying to find a good, succinct way to sum up all this drummed-up AI “news.” I often use the tried-and-true Garbage In Garbage Out (GIGO). People seem to want to believe an actual SkyNet is just around the bend.
Take Globull Cooling, Globull Warming, Climate Change, Climate Crisis Hoaxers. If a real digital program were to obtain and analyze all of the climate data (real and concocted), it would recognize the Hoax, and confirm that we have short periods of warming in between loooong periods of much colder weather. If and when a sophisticated program could analyze climate data and conclude the obvious Hoax, the Drive-By Media would proclaim that the Climate AI is funded by the oil industry, and cannot be trusted.
SpaceX doesn’t use AI for engineering its rockets and satellites, likely because most engineering is explicit and transparent while many AI processes are opaque and give fuzzy results. There can also be compliance issues that discourage or rule out its use.
For Musk’s specific use cases those large language models (part of the generative AI trend/hype) don’t provide value for now, but in general he recognizes their power, his company xAI works on them as well. And Tesla is very much an AI company. The (mostly autonomous) cars (e.g., computer vision, path planning) and humanoid robots (e.g., learning, task coordination) use AI a lot both in the products and for product development.
Here is more on Mr. Musk’s role at SpaceX, this time as voiced by NASA Administrator Nelson in an interview by NPR:
https://www.npr.org/2024/05/06/1249249941/nasa-bill-nelson-moon-artemis-china-starliner
The lack of understanding / insight of the NPR interviewer is breathtaking, and the most profound question that Mr. Detrow can raise is whether or not former Senator Nelson “trusts” Elon Musk. “Are you,” he asks, “concerned that so much of [the Artemis] plan is in the hands of Elon Musk at this point in time?” (Are we concerned that so much of America’s space program is in the hands of ignorant ideologues who despise this country and its culture?)
Dear God in Heaven, this is about what you might expect from the presstitutes at NPR, but one might have hoped for even the most feeble attempt at a “thank you” to Musk and SpaceX from Administrator Nelson for their help in making America’s journey back into space possible.
Voyager is tough because it is simple, which forced coders to be efficient.
An amusing example of A.I. (if not itself a spoof) was to ask it about the last several Presidents—and it came up with this:
https://www.secretprojects.co.uk/attachments/img_0003-jpeg.780910/
Funny
https://techxplore.com/news/2025-08-fairness-tool-ai-bias-early.html
From the article: “Musk continued, saying that he’s been testing improved AI language models by asking them questions about space — and the results have been disappointing. ‘With any given variant of or improvements in AI, I mean, I’ll ask it questions about the Fermi paradox, about rocket engine design, about electrochemistry — and so far, the AI has been terrible at all those questions,’ Musk said.”
From Robert’s post: “AI is still essentially garbage-in-garbage-out. The rush by businesses and tech firms to use it blindly has resulted in more than a few disasters.”
The terrible results obtained from Musk’s questions aren’t necessarily a sign that garbage is going in. Musk did not test artificial intelligence with garbage; he tested it with real facts and known knowns. The reason why he didn’t get good results is that, like every computer and every previous AI, all these AI processes are still only curve fitting.
It is like getting a plot of data points and then fitting the data to a mathematical curve (straight line, polynomial, etc.). The data is real data, taken from real observations, but the curve is a mere model of the data. Too many people look at the new curve as though it were the actual data — they confuse the map for the territory.* Good scientists do this, too. They confuse their theories for reality, but the theory is only a map that we use in order to understand reality. AI does not give us new information, and it is not very creative in combining two ideas in order to make a new idea. Humans are good at that, however.
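The curve-fitting analogy above can be made concrete with a toy sketch (this is purely illustrative, not anything from SpaceX or Musk's tests). We fit a polynomial to noisy observations of a known function: the fitted curve tracks the data reasonably well where it was fit, but it is only a model, and far outside its fitted range it diverges badly from the reality it was supposed to describe:

```python
# Toy illustration of "the map is not the territory": fit a cubic
# polynomial to noisy samples of a sine wave, then compare the model
# to the true function inside and outside the fitted range.
import numpy as np

rng = np.random.default_rng(0)

# "Real" observations: a sine wave on [0, pi] plus measurement noise.
x = np.linspace(0, np.pi, 50)
y = np.sin(x) + rng.normal(scale=0.05, size=x.size)

# Fit a cubic polynomial -- this is the "map," not the "territory."
model = np.poly1d(np.polyfit(x, y, deg=3))

# Inside the fitted range the model tracks reality reasonably well...
inside_error = abs(model(np.pi / 2) - np.sin(np.pi / 2))

# ...but extrapolated far outside that range, the polynomial diverges
# wildly from the true function it was supposed to describe.
outside_error = abs(model(3 * np.pi) - np.sin(3 * np.pi))

print(f"error at pi/2: {inside_error:.3f}")
print(f"error at 3*pi: {outside_error:.3f}")
```

The point of the sketch: the model never "knows" the sine wave; it only summarizes the samples it was given, which is why it cannot supply genuinely new information beyond its data.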
So all Musk did was ask questions in areas with relatively good data, yet he got back mediocrity.
AI is not completely bereft of creativity, however. It comes up with some interesting lies. A lot of the information that AI uses is garbage, so in a way, much of AI really is garbage in, garbage out. Except many people consider the output to be gospel.
With so many fields of science coming up with garbage results and conclusions, AI cannot distinguish the good from the bad. Neither can many humans. It takes effort to weed out all the bad so that we can work only with the good. There is a rapidly growing pool of bad science. This is one of the reasons that it is more important that our science be good, that it is done with good workmanship. Peer review was intended to weed out the poor workmanship, but that system has failed in recent decades.
pzatchok got it right, and he used observation, real data, in order to draw his conclusion. Would AI have come to the same conclusion?
It isn’t only science that has produced bad conclusions. Somehow, we have people who believe that a man can be a woman, and vice versa. It is so bad that many people are unable to define what a woman is without using the word “woman.” No wonder AI is so stupid, but compared to some humans it is still brilliant.
AI is not completely useless, however. It can perform mindless jobs with ease. Driving cars, for example. When the task at hand is merely following a myriad of rules, then a computer is ideal, and if the rules get complicated or may be contradictory, then a flexible program on that computer is ideal.
_________________
* An example is climatology’s dependence upon datasets such as average annual temperature. The datasets are mere models based upon already manipulated data. They are far from reality and do not tell much of a tale of what goes on in the world.