• OpenAI Robotics Leader Resigns, Says Ethical ‘Lines’ Were Crossed in Pentagon Deal

    A senior hardware leader at OpenAI has stepped down, saying the company moved too quickly on a controversial deal involving artificial intelligence and the US military. Caitlin Kalinowski, the high-profile executive hired to lead OpenAI’s hardware and robotics charge, announced her resignation this past Saturday. Her departure is a direct response to the company’s recent deal to put its AI models into the Pentagon’s classified networks, a move she suggests happened far too quickly
  • Microsoft Debuts Copilot Cowork, Bringing Claude Tech Into Office Workflows

    Microsoft wants Copilot to do more than generate answers. The tech giant has unveiled agentic Copilot Cowork, a new Microsoft 365 capability that uses Claude technology to execute workplace tasks and automate multi-step office workflows. Designed to turn simple requests into coordinated action across tools like Outlook, Teams, and Excel, the system lets users hand off work while Copilot builds the plan and carries it through in the background.
    Turning intent into action
    When a task is handed off, C
  • AI Chatbots Point Users to Illegal Gambling Sites, Investigation Finds

    Ask an AI chatbot about online gambling, and it may do more than answer the question. A new investigation found that some of the biggest AI platforms could be pushed to recommend illegal casinos and even suggest ways around gambling safeguards.
    That turns a familiar AI safety debate into something more concrete. The problem is not just that chatbots can surface risky information. It is that they may also help users get around protections meant to limit gambling harm.
    When guardrails give way
    The
  • AI War Propaganda? Viral Images of Captured US Soldiers in Iran Exposed as Fake

    You won’t believe your eyes, but in the age of generative AI, maybe you shouldn’t. A set of viral images circulating on social media appeared to show American soldiers captured by Iran, which would be a dramatic development in the escalating conflict between the United States and Iran. The post quickly began spreading across platforms like X and Facebook on March 5, with captions claiming “U.S. Delta Force troops” had been taken into custody. The images were shared in multi
  • OpenAI Launches Codex Security to Find, Patch Code Vulnerabilities

    OpenAI has unveiled a new AI agent designed to find security flaws and vulnerabilities in enterprise IT systems, positioning it as a competitor to Anthropic’s Claude Code Security launched last month. Codex Security can scan a company’s IT system, identify flaws, provide a list of solutions, and then fix the vulnerabilities. OpenAI said the agent will aim to provide easy-to-apply patches, with an emphasis on avoiding creating more work than a cybersecurity firm would normally require.
  • Gemini Beats Claude, GPT in Google’s First Android AI Coding Benchmark

    Google just published its first Android Bench leaderboard, ranking the AI models that perform best at coding Android apps. Nine models made the list, including entries from Google Gemini, Anthropic Claude, and OpenAI. Not surprisingly, Gemini 3.1 Pro Preview led the benchmark with a 72.4% score, followed by Claude Opus 4.6 and GPT-5.2 Codex. Google created the benchmark to measure how well AI systems solve real Android development problems using tasks drawn from several GitHub
  • 12 AI Prompt Templates Every Professional Should Bookmark

    AI is only as useful as the instructions you give it. The difference between a generic response and a powerful one often comes down to the prompt. Across industries, from marketing and software development to research, business strategy, and customer service, professionals are discovering that well-structured prompts can turn AI into a practical assistant rather than a novelty tool. Instead of collecting dozens of random prompts, what most people really need are a few reliable templates they can
  • Grammarly’s AI Expert Review Sparks Backlash Over Consent

    Grammarly’s latest AI feature has landed in uncomfortable territory. Its “Expert Review” tool offers writing feedback framed through the voices of real authors, journalists, and academics, including people who never agreed to participate and at least one scholar who recently died.
    That is what makes this more than a routine AI product dispute. Grammarly is not just summarizing publicly available writing or suggesting edits in a generic voice. It is packaging those suggestions a