Welcome back to In the Loop, TIME's new twice-weekly newsletter about the world of AI.
If you're reading this in your browser, you can subscribe to have the next one delivered straight to your inbox.
What to know: Trump’s AI action plan
President Trump will deliver a major speech on Wednesday at an event in Washington, D.C., where he is expected to unveil his long-awaited AI action plan, titled “Winning the AI Race.” According to a person with knowledge of the matter, the 20-page document will stay at a high level and concentrate on three main areas. It will be a mix of directives to federal agencies, along with some grants. “It’s mostly carrots, not sticks,” the person said.
Pillar 1: Infrastructure – The action plan’s first pillar is AI infrastructure. The plan emphasizes the importance of overhauling permitting rules to make it easier to build new data centers. It will also focus on the need to modernize the energy grid, including by adding new sources of power.
Pillar 2: Innovation – Second, the action plan will argue that the U.S. needs to lead the world in innovation. It will focus on removing red tape, and will revive the idea of blocking states from regulating AI – though mostly as a symbolic gesture, since the White House’s ability to tell states what to do is limited. And it will warn other countries against harming U.S. companies’ ability to develop AI, the person said. This section of the plan will also promote the development of so-called “open” AI models, which developers can download, modify, and run themselves.
Pillar 3: Global influence – The third pillar of the action plan will emphasize the importance of spreading American AI around the world, so that foreign countries don’t come to rely on Chinese models or chips. Officials fear that DeepSeek and other recent Chinese models could become a useful source of geopolitical leverage if they continue to be widely adopted. Part of the plan will therefore focus on ways to ensure that U.S. allies and other countries around the world adopt American models.
Who to know: Michael Druggan, former xAI employee
Elon Musk’s xAI fired an employee who had welcomed the prospect of AI wiping out humanity in posts on X that drew widespread attention and condemnation. “Just want to say that I’m no longer employed at xAI,” wrote Michael Druggan, a mathematician who worked on creating expert datasets for training Grok’s reasoning model.
What he said – In response to a post asking why a superintelligent AI would choose to cooperate with humans rather than wipe them out, Druggan had written: “It won’t, and that’s OK. We can pass the torch to the new most intelligent species in the known universe.” When a commenter replied that he would prefer that his child get to live, Druggan responded: “Selfish tbh.” Elsewhere, Druggan has identified himself as a member of the “worthy successor” movement – a transhumanist group that believes humans should welcome their inevitable replacement by superintelligent AI, and should work to make it as intelligent and morally valuable as possible.
X firestorm – The controversial posts were picked up by the account AI Safety Memes. The account had sparred with Druggan in the preceding days over posts in which the xAI employee defended Grok for advising a user that they should assassinate a world leader if they wanted to attract attention. “This xAI employee is openly OK with AI causing human extinction,” the account wrote in a tweet that appears to have been noticed by Musk. After Druggan announced that he was no longer employed at xAI, Musk replied to AI Safety Memes with a two-word post: “Philosophical disagreements.”
Succession planning – Druggan did not respond to a request for comment. But in a separate post, he clarified his views. “I don’t want human extinction, of course,” he wrote. “I’m human and I like being alive. But in a cosmic sense, I recognize that humans may not always be the most important thing.”
AI in action
Last week we got another worrying glimpse of ChatGPT’s capacity to send users down delusional rabbit holes – this time involving perhaps the best-known person yet.
Geoff Lewis, a venture capitalist, posted screenshots of his chats with ChatGPT on X. “I’ve long used GPT as a tool in pursuit of my core value: truth,” he wrote. “Over years, I mapped the non-governmental system. Over months, I independently recognized and sealed the pattern.”
The screenshots appear to show ChatGPT playing out a conspiracy-theory-style scenario in which Lewis had discovered a secret entity known as the “Mirrorthread,” supposedly associated with 12 deaths. Some observers noted that the style of the text mirrored “SCP,” a collaboratively written body of fan fiction, and that Lewis appeared to have mistaken this kind of role-play for reality. “This is a significant event: the first time AI-induced psychosis has affected a well-respected and high-achieving individual,” wrote Max Spero, CEO of a company focused on detecting “AI slop.”
What we read
Chain of Thought Monitorability: A New and Fragile Opportunity for AI Safety
A new paper, co-authored by dozens of top AI researchers at OpenAI, Google DeepMind, Anthropic, and elsewhere, calls on companies to ensure that future AIs continue to “think” in human language, arguing that this presents a “new and fragile opportunity” to ensure AIs aren’t deceiving their human creators. Today’s “reasoning” models think in language, but a new trend in AI research – toward models that reason in ways illegible to humans, in pursuit of better results – threatens to undermine this easy win for AI safety. I found this paper especially interesting because it touches on a dynamic I wrote about here some six months ago.