Space Force puts a halt to the use of AI because of security issues
The Space Force has decided to stop using artificial intelligence (AI) computer tools because of the security risks their use presently poses.
The Sept. 29 memorandum, addressed to the Guardian Workforce, the term for Space Force members, pauses the use of any government data on web-based generative AI tools, which can create text, images or other media from simple prompts. The memo says they “are not authorized” for use on government systems unless specifically approved.
Chatbots and tools like OpenAI’s ChatGPT have exploded in popularity. They make use of language models that are trained on vast amounts of data to predict and generate new text. Such LLMs have given birth to an entire generation of AI tools that can, for example, search through troves of documents, pull out key details and present them as coherent reports in a variety of linguistic styles.
Generative AI “will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed,” Lisa Costa, Space Force’s chief technology and innovation officer, said in the memo. But Costa also cited concerns over cybersecurity, data handling and procurement requirements, saying that the adoption of AI and LLMs needs to be “responsible.”
This decision appears very wise. The insane fad in the last year to quickly adopt and even rely on AI has more than baffled me. Why are we in such a rush to let a robot do our thinking and creative work for us? Have we become so lazy and dependent on computers that we’d rather let them do everything?
It is always dangerous to jump on a fad without thought. That the Space Force has realized this is excellent news.