Space Force puts a halt to the use of AI because of security issues
The Space Force has decided to stop using any artificial intelligence (AI) computer tools because of the security risks that presently exist in using them.
The Sept. 29 memorandum, addressed to the Guardian Workforce, the term for Space Force members, pauses the use of any government data on web-based generative AI tools, which can create text, images or other media from simple prompts. The memo says they “are not authorized” for use on government systems unless specifically approved.
Chatbots and tools like OpenAI’s ChatGPT have exploded in popularity. They make use of language models that are trained on vast amounts of data to predict and generate new text. Such LLMs have given birth to an entire generation of AI tools that can, for example, search through troves of documents, pull out key details and present them as coherent reports in a variety of linguistic styles.
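The core idea of "predict and generate new text" can be illustrated with a toy sketch vastly simpler than a real LLM: a bigram model that records which word follows which in its training text and then samples likely next words. Everything here (the sample corpus, the `generate` helper) is invented purely for illustration.

```python
import random
from collections import defaultdict

# Toy illustration of next-word prediction: a bigram model,
# far cruder than a real LLM but built on the same core idea.
corpus = "the force paused the use of generative tools on the web".split()

# Record which word follows which in the training text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Generate text by repeatedly sampling a word seen after the last one."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:  # no known continuation; stop early
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

A real LLM replaces the word-count table with a neural network trained on vastly more data, but the loop is the same: look at what came before, predict what comes next.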
Generative AI “will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed,” Lisa Costa, Space Force’s chief technology and innovation officer, said in the memo. But Costa also cited concerns over cybersecurity, data handling and procurement requirements, saying that the adoption of AI and LLMs needs to be “responsible.”
This decision appears very wise. The insane fad in the last year to quickly adopt and even rely on AI has more than baffled me. Why are we in such a rush to let a robot do our thinking and creative work for us? Have we become so lazy and dependent on computers that we’d rather let them do everything?
It is always dangerous to jump on a fad without thought. That the Space Force has realized this is excellent news.
On Christmas Eve 1968 three Americans became the first humans to visit another world. What they did to celebrate was unexpected and profound, and will be remembered throughout all human history. Genesis: the Story of Apollo 8, Robert Zimmerman's classic history of humanity's first journey to another world, tells that story, and it is now available as both an ebook and an audiobook, both with a foreword by Valerie Anders and a new introduction by Robert Zimmerman.
The print edition can be purchased at Amazon, from any other book seller, or direct from my ebook publisher, ebookit.
The ebook is available everywhere for $5.99 (before discount) at amazon, or direct from my ebook publisher, ebookit. If you buy it from ebookit you don't support the big tech companies and the author gets a bigger cut much sooner.
The audiobook is also available at all these vendors, and is also free with a 30-day trial membership to Audible.
"Not simply about one mission, [Genesis] is also the history of America's quest for the moon... Zimmerman has done a masterful job of tying disparate events together into a solid account of one of America's greatest human triumphs."--San Antonio Express-News
Minor edit in penultimate paragraph: “and even rely on”
AI is a fad like you said.
To me it’s nothing more than clever programming. It has not shown as much intelligence as a monkey to me yet.
It doesn’t extrapolate its own conclusions yet. It just does what it was programmed to do.
If it was really intelligent like they imply, why is it not looking for the answers to the questions of science?
So far all it is is a clever search engine interface.
Andi: Fixed. Thank you again.
“They make use of language models that are trained on vast amounts of data to predict and generate new text.”
AI. It’s just like people!
“Have we become so lazy and dependent on computers that we’d rather let them do everything?”
In a word, yes.
I understand the pause in the use of AI, but I hope that does not mean they will stop studying it.
AI is an interesting new tool with great promise, but not the panacea many have proclaimed. It relies upon the provision of vast troves of “training” data to base its findings upon, and therein lies its Achilles heel, for as with any computer system it is highly vulnerable to GIGO (Garbage In/Garbage Out), because there is really no “reasoning” involved. It simply reflects what can best be termed the “consensus” opinion of the data it is trained with. And as we all know from Covid and CAGW, the consensus is often based on politics and theology, not actual DATA.
When well trained, AI can be quite useful. The best example I have heard is its ability to generate computer code to address well defined problems, and I have a friend who has personally exploited this ability in his tech business. Of course you cannot blindly trust it and need to be diligent in quality assurance of the final product, but it has proven to be quite useful and will only get better.
But YOU HAVE TO CAREFULLY CURATE AND MANAGE what you use to train the system, or you will only be asking for problems. No different than our school system, really, where we have all seen the impact of progressive material on the curricula and the adoption of “no answer is really wrong,” even for mathematics. So there is no doubt some AI will be deployed that is intentionally perverted yet presented as authoritative by the likes of MSNBC (again, it is a tool, and this usage fits their intended purpose). So the challenge will be to identify these for what they are and to support and promote well curated platforms instead. And I wouldn’t be at all surprised if many of the tech companies promoting these platforms have bifurcated systems, with a “well trained” model strictly for in-house use and a far less reliable one subject to progressive persuasion for public consumption. But then I am a cynic.