Space Force puts a halt to the use of AI because of security issues

The Space Force has decided to stop using any artificial-intelligence (AI) computer tools because of the security risks currently involved in using them.

The Sept. 29 memorandum, addressed to the Guardian Workforce, the term for Space Force members, pauses the use of any government data on web-based generative AI tools, which can create text, images or other media from simple prompts. The memo says they “are not authorized” for use on government systems unless specifically approved.

Chatbots and tools like OpenAI’s ChatGPT have exploded in popularity. They make use of language models that are trained on vast amounts of data to predict and generate new text. Such LLMs have given birth to an entire generation of AI tools that can, for example, search through troves of documents, pull out key details and present them as coherent reports in a variety of linguistic styles.
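The core mechanism behind these language models is next-word prediction: given the words so far, pick the word most likely to follow, based on counts learned from training data. As a rough illustration only (a toy bigram model, nothing like the scale or architecture of a real LLM), the idea can be sketched as:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in a toy corpus."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

Real LLMs replace these simple word-pair counts with neural networks trained on billions of documents, but the generative principle, predicting what comes next and repeating, is the same.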

Generative AI “will undoubtedly revolutionize our workforce and enhance Guardian’s ability to operate at speed,” Lisa Costa, Space Force’s chief technology and innovation officer, said in the memo. But Costa also cited concerns over cybersecurity, data handling and procurement requirements, saying that the adoption of AI and LLMs needs to be “responsible.”

This decision appears very wise. The insane fad in the last year to quickly adopt and even rely on AI has more than baffled me. Why are we in such a rush to let a robot do our thinking and creative work for us? Have we become so lazy and dependent on computers that we’d rather let them do everything?

It is always dangerous to jump on a fad without thought. That the Space Force has realized this is excellent news.

Gunhill Road – I Got a Line in New York City

An evening pause: Something a bit different. As noted on the YouTube webpage, the visuals here “were created by human artists tapping into the assistance of leading-edge generative AI.” Normally I find the fad of turning to AI to do our thinking and creating for us more than appalling, but in this case it is clear the artists guided the art, and then fitted it well to the music.

Hat tip Bob Robert.

Robots communicating in languages humans can’t understand

The rise of the machines! When two of its artificial-intelligence (AI) bots began to communicate in a language humans could not understand, Facebook’s researchers put a stop to it.

At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming. “There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two [robot] agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

…Facebook ultimately opted to require its negotiation bots to speak in plain old English. “Our interest was having bots who could talk to people,” says Mike Lewis, research scientist at FAIR. Facebook isn’t alone in that perspective. When I inquired to Microsoft about computer-to-computer languages, a spokesperson clarified that Microsoft was more interested in human-to-computer speech. Meanwhile, Google, Amazon, and Apple are all also focusing incredible energies on developing conversational personalities for human consumption. They’re the next wave of user interface, like the mouse and keyboard for the AI era.

The other issue, as Facebook admits, is that it has no way of truly understanding any divergent computer language. “It’s important to remember, there aren’t bilingual speakers of AI and human languages,” says Batra. We already don’t generally understand how complex AIs think because we can’t really see inside their thought process. Adding AI-to-AI conversations to this scenario would only make that problem worse.

The article makes some interesting points about the advantages of allowing this AI software to create its own language. For me, none of those arguments is very convincing.

South Korea commits almost a billion dollars to AI research

In reaction to the recent Go victory by a computer program over a human, the government of South Korea has quickly accelerated its plans to back research into the field of artificial intelligence, with a commitment of $863 million and the establishment of a public/private institute.

Scrambling to respond to the success of Google DeepMind’s world-beating Go program AlphaGo, South Korea announced on 17 March that it would invest $863 million (1 trillion won) in artificial-intelligence (AI) research over the next five years. It is not immediately clear whether the cash represents new funding, or had been previously allocated to AI efforts. But it does include the founding of a high-profile, public–private research centre with participation from several Korean conglomerates, including Samsung, LG Electronics and Hyundai Motor, as well as the technology firm Naver, based near Seoul.

The timing of the announcement indicates the impact in South Korea of AlphaGo, which two days earlier wrapped up a 4–1 victory over grandmaster Lee Sedol in an exhibition match in Seoul. The feat was hailed as a milestone for AI research. But it also shocked the Korean public, stoking widespread concern over the capabilities of AI, as well as a spate of newspaper headlines worrying that South Korea was falling behind in a crucial growth industry.

South Korean President Park Geun-hye has also announced the formation of a council that will provide recommendations to overhaul the nation’s research and development process to enhance productivity. In her 17 March speech, she emphasized that “artificial intelligence can be a blessing for human society” and called it “the fourth industrial revolution”. She added, “Above all, Korean society is ironically lucky, that thanks to the ‘AlphaGo shock’, we have learned the importance of AI before it is too late.”

Not surprisingly, some academics are complaining that the money is going to industry rather than the universities. For myself, I wonder if this crony-capitalist approach will produce any real development, or whether it will instead end up as a pork-laden jobs program for South Korean politicians.

Computer program learns and then wins at Go

A computer program, dubbed AlphaGo, has successfully beaten a professional player of Go for the first time.

What is significant, however, is the method used by that computer program to win:

The IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at the game. But AlphaGo was not preprogrammed to play Go: rather, it learned using a general-purpose algorithm that allowed it to interpret the game’s patterns, in a similar way to how a DeepMind program learned to play 49 different arcade games.

This means that similar techniques could be applied to other AI domains that require recognition of complex patterns, long-term planning and decision-making, says Hassabis. “A lot of the things we’re trying to do in the world come under that rubric.” Examples are using medical images to make diagnoses or treatment plans, and improving climate-change models.

If computer programs are now able to learn and adapt, it will become increasingly difficult to distinguish between those programs and actual humans.