Recent Advances in AI: Internet-Connected Coding, Developer Assistants, and Enterprise Innovations

Artificial intelligence continues to evolve rapidly, with recent announcements highlighting significant advancements in coding assistants, enterprise AI adoption, and specialized AI tools for developers and data professionals. This past week brought several notable updates from major players in the AI space, ranging from enhanced model capabilities to new certification programs and innovation labs.

OpenAI Codex Gains Internet Access and Broader Availability

OpenAI has significantly upgraded its coding agent, Codex, by enabling internet access during task execution. This new capability unlocks a range of possibilities for developers using the tool.

With internet connectivity, Codex can now perform actions that require fetching external information or interacting with online resources. These include:

  • Installing necessary dependencies directly.
  • Running tests that rely on external services or data.
  • Upgrading or installing software packages as needed.

While powerful, internet access is not enabled by default for Codex. Users must explicitly turn it on when creating a new environment or when editing an existing one. Granular control is provided, allowing users to specify which domains and HTTP methods Codex is permitted to use.
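
To make that access model concrete, here is a minimal conceptual sketch of a domain-and-method allowlist check. It is purely illustrative: Codex's actual controls are configured in its environment settings rather than in code, and the domains and methods below are hypothetical.

```python
from urllib.parse import urlparse

# Hypothetical egress policy: allowed domains mapped to permitted HTTP methods.
# These values are illustrative only; Codex's real settings live in its
# environment configuration, not in code like this.
ALLOWLIST = {
    "pypi.org": {"GET"},
    "files.pythonhosted.org": {"GET"},
    "api.github.com": {"GET", "POST"},
}

def is_request_allowed(url: str, method: str) -> bool:
    """Return True if the URL's host and the HTTP method are both on the allowlist."""
    host = urlparse(url).hostname or ""
    allowed_methods = ALLOWLIST.get(host)
    return allowed_methods is not None and method.upper() in allowed_methods

print(is_request_allowed("https://pypi.org/simple/requests/", "GET"))  # True
print(is_request_allowed("https://example.com/data", "GET"))           # False
```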

In addition to the technical enhancements, OpenAI has begun rolling out Codex access to ChatGPT Plus subscribers. While this expands the reach of the tool, OpenAI has indicated that rate limits might be applied to Plus users during periods of high demand.

Mistral AI Unveils New Coding Assistant for Enterprises

Mistral AI has introduced its new offering tailored for developers, Mistral Code. This coding assistant is built upon the open-source project Continue, leveraging its foundation of models, rules, prompts, and documentation designed for AI code assistance.

Mistral Code is powered by a suite of four distinct coding models:

  • Codestral
  • Codestral Embed
  • Devstral
  • Mistral Medium

The assistant demonstrates proficiency across more than 80 programming languages. Its capabilities extend beyond simple code generation, allowing it to reason over development artifacts such as files, Git diffs, terminal outputs, and issue trackers.
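
As a rough illustration of the kind of model call that underpins such an assistant, the sketch below sends a code-generation prompt to Mistral's public chat completions API using the codestral-latest alias. This is not how Mistral Code itself is wired up; the endpoint, model alias, and environment variable here are assumptions based on Mistral's public API.

```python
import os
import requests

# Minimal sketch: call a Codestral-family model through Mistral's public
# chat completions API. Mistral Code is an IDE assistant; this only shows
# the kind of model call that sits underneath such tools.
API_KEY = os.environ["MISTRAL_API_KEY"]  # assumes a key exported in the environment

response = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "codestral-latest",  # public alias; Mistral Code may use other deployments
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a linked list."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```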

Mistral Code is currently accessible through a private beta program, supporting popular integrated development environments (IDEs) like JetBrains products and VS Code.

The company has articulated a clear vision for Mistral Code, emphasizing its role in delivering “best-in-class coding models to enterprise developers.” The platform aims to support a wide spectrum of tasks, from instant code completions to complex multi-step refactoring. A key differentiator is the integrated platform approach, with flexible deployment options including cloud, reserved capacity, or air-gapped on-premises graphics processing units (GPUs). Unlike some conventional software-as-a-service (SaaS) copilots, Mistral emphasizes that every component of the stack, from models to code, comes from a single vendor, providing a unified experience under a single set of Service Level Agreements (SLAs) and keeping code within the customer’s enterprise boundary.

Postman Integrates AI Agents into API Development Workflow

Postman, a leading platform for API development, has introduced Agent Mode, designed to embed the power of AI agents directly into its core functionalities. These agents are intended to act as intelligent assistants within the Postman environment, streamlining various API-related tasks.

The capabilities of these newly integrated AI agents include:

  • Creating, organizing, and updating API collections.
  • Generating comprehensive test cases for APIs.
  • Producing documentation automatically.
  • Building multi-step agents to automate repetitive API workflows.
  • Setting up and configuring API monitoring and observability.

According to Abhinav Asthana, CEO and co-founder of Postman, the integration is akin to having a highly knowledgeable Postman expert working alongside the user.

Beyond Agent Mode, Postman has also enhanced its platform’s connectivity. Users can now turn any public API available on the Postman network into an MCP (Model Context Protocol) server. To support this, the company has launched a new network specifically for MCP servers. This network serves as a centralized discovery hub, allowing publishers to host tools and services for agents and making them easily findable by developers. Asthana noted that existing remote MCP servers have been verified and integrated into this public network.
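
For readers unfamiliar with MCP, the sketch below shows a minimal, generic Model Context Protocol server built with the official Python SDK. The echo tool is a made-up example and is unrelated to the servers Postman generates from API definitions.

```python
# A minimal, generic MCP server using the Model Context Protocol Python SDK
# (package "mcp"). The echo tool is a made-up example; Postman generates MCP
# servers from API definitions rather than hand-written code like this.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def echo(message: str) -> str:
    """Return the message unchanged, so an agent can verify connectivity."""
    return message

if __name__ == "__main__":
    # Communicates over stdio by default, which is how many MCP clients connect.
    mcp.run()
```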

FinOps Foundation Launches Certification for Managing AI Costs

Recognizing the growing financial implications of artificial intelligence deployments, the FinOps Foundation has rolled out a new training and certification program: FinOps for AI. This initiative is specifically designed to equip FinOps practitioners with the knowledge and skills required to manage and optimize the cloud spend associated with AI workloads effectively.

The certification curriculum is structured to cover essential topics at the intersection of FinOps and AI, including:

  • Specific methods for allocating costs related to AI resources.
  • Models for charging back AI expenses within an organization.
  • Strategies for optimizing AI workload performance and cost-efficiency.
  • Analysis of unit economics for AI services (a simple illustration follows this list).
  • Considerations for the sustainability of AI infrastructure.
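
As a back-of-the-envelope illustration of the unit-economics and chargeback topics above, the sketch below attributes hypothetical monthly AI spend to teams and computes a cost-per-request figure. All prices and usage numbers are invented and are not FinOps Foundation guidance.

```python
# Back-of-the-envelope unit economics for an AI service. All prices and usage
# figures below are made-up placeholders, not FinOps Foundation guidance.
PRICE_PER_1K_INPUT_TOKENS = 0.0005   # USD, hypothetical
PRICE_PER_1K_OUTPUT_TOKENS = 0.0015  # USD, hypothetical

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost of a single model call."""
    return ((input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS
            + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS)

# Chargeback: attribute monthly spend to the teams that generated it.
usage_by_team = {
    "search":  {"requests": 120_000, "avg_in": 800, "avg_out": 250},
    "support": {"requests": 40_000, "avg_in": 1200, "avg_out": 400},
}

for team, u in usage_by_team.items():
    monthly = u["requests"] * request_cost(u["avg_in"], u["avg_out"])
    per_request = monthly / u["requests"]  # a simple unit-economics metric
    print(f"{team}: ${monthly:,.2f}/month, ${per_request:.5f} per request")
```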

The educational content is presented as a four-part series. The first part is currently available, with subsequent parts scheduled for release in September 2025, November 2025, and January 2026. The comprehensive certification exam is planned to be available to candidates in March 2026.

Microsoft Dev Proxy Update Enhances LLM Tracking

Dev Proxy 0.28, the latest version of Microsoft’s developer tool, includes significant updates focused on providing better visibility and control over Large Language Model (LLM) interactions, particularly with OpenAI services.

A key addition in this release is the OpenAITelemetryPlugin. This plugin offers developers enhanced insight into how their applications interact with OpenAI’s APIs. For every request made, the plugin provides detailed information, including:

  • The specific OpenAI model utilized.
  • The total token count for the request and response.
  • An estimated cost associated with the interaction.
  • Grouped summaries of usage and costs per model.

This telemetry helps developers understand and manage the expenses and resource consumption of their AI integrations.
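
To illustrate the kind of roll-up this telemetry enables, the sketch below aggregates hypothetical per-request usage records into per-model token and cost summaries. It is not Dev Proxy code, and the per-token prices are placeholders.

```python
from collections import defaultdict

# Illustration of the kind of per-model usage and cost roll-up that telemetry
# like Dev Proxy's OpenAITelemetryPlugin reports. This is not Dev Proxy code;
# the records and per-1K-token prices below are placeholders.
records = [
    {"model": "gpt-4o-mini", "prompt_tokens": 820, "completion_tokens": 310},
    {"model": "gpt-4o-mini", "prompt_tokens": 450, "completion_tokens": 180},
    {"model": "gpt-4o",      "prompt_tokens": 1200, "completion_tokens": 600},
]
price_per_1k = {  # hypothetical (input, output) prices in USD
    "gpt-4o-mini": (0.00015, 0.0006),
    "gpt-4o":      (0.0025, 0.01),
}

summary = defaultdict(lambda: {"tokens": 0, "cost": 0.0})
for r in records:
    inp, out = price_per_1k[r["model"]]
    summary[r["model"]]["tokens"] += r["prompt_tokens"] + r["completion_tokens"]
    summary[r["model"]]["cost"] += (r["prompt_tokens"] / 1000) * inp + (r["completion_tokens"] / 1000) * out

for model, s in summary.items():
    print(f"{model}: {s['tokens']} tokens, ~${s['cost']:.4f}")
```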

Furthermore, Dev Proxy 0.28 now supports using Foundry Local, a local AI runtime stack, as a provider for local language models. This allows developers to test and work with LLMs offline or within controlled local environments.
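
A common pattern with local runtimes such as Foundry Local is to point an OpenAI-compatible client at the local endpoint. The sketch below shows that general pattern, not Dev Proxy's own configuration; the base URL, port, and model alias are placeholders to check against your local installation.

```python
from openai import OpenAI

# Generic pattern for talking to a local, OpenAI-compatible runtime such as
# Foundry Local. The base URL, port, and model alias are placeholders; check
# what your installation actually exposes. Dev Proxy's own settings live in
# its JSON configuration files, not here.
client = OpenAI(
    base_url="http://localhost:5273/v1",  # placeholder local endpoint
    api_key="not-needed-locally",         # local runtimes typically ignore the key
)

reply = client.chat.completions.create(
    model="phi-3.5-mini",  # placeholder model alias
    messages=[{"role": "user", "content": "Summarize what an HTTP proxy does."}],
)
print(reply.choices[0].message.content)
```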

Other notable improvements in this version include:

  • New extensions specifically for .NET Aspire development.
  • Enhanced capabilities for generating PATCH operations when working with TypeSpec.
  • Added support for JSONC format in mock files, offering more flexibility for configuration.
  • General improvements to the logging system for better debugging and monitoring.

The updates in Dev Proxy 0.28 aim to give developers more control, visibility, and flexibility when building applications that integrate with LLMs.

Snowflake Introduces Agentic AI for Advanced Data Insights

Snowflake is introducing innovative AI capabilities built around the concept of agentic AI to enhance data analysis and insight generation. These features are designed to make interacting with data more intuitive and automate complex analytical tasks.

One of the key upcoming features is Snowflake Intelligence, which is expected to be available in public preview soon. This service utilizes intelligent data agents to provide a natural language interface for querying data. Users can simply ask questions, and the system will work to deliver actionable insights derived from both structured and unstructured data sources.

Another capability slated for private preview soon is a new Data Science Agent. This agent is specifically tailored for data scientists, aiming to automate many routine tasks involved in developing machine learning models, thereby streamlining the ML workflow.

Snowflake Intelligence is designed to unify data from a multitude of sources. It leverages a new feature called Snowflake Openflow to simultaneously process information from diverse formats like spreadsheets, documents, images, and traditional databases. The embedded data agents are capable of generating visualizations from the analyzed data and helping users translate insights into concrete actions. Additionally, Snowflake Intelligence will be able to access external knowledge bases through Cortex Knowledge Extensions, which are soon to reach general availability on the Snowflake Marketplace.
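
Since Snowflake Intelligence is still in preview, the sketch below approximates natural-language querying with the existing SNOWFLAKE.CORTEX.COMPLETE SQL function called from Python, as a stand-in rather than the Snowflake Intelligence interface itself. The connection parameters, model name, and table are placeholders.

```python
import json
import snowflake.connector

# Rough stand-in for natural-language querying: ask a question with the existing
# SNOWFLAKE.CORTEX.COMPLETE SQL function. Connection parameters, the model name,
# and the table are all placeholders.
conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",
    warehouse="my_wh", database="my_db", schema="public",
)
cur = conn.cursor()

# Pull a small slice of structured data to ground the question.
cur.execute("SELECT product_line, revenue, quarter FROM quarterly_sales LIMIT 50")
rows = [dict(zip([c[0] for c in cur.description], r)) for r in cur.fetchall()]

prompt = (
    "Answer the question using only this data.\n"
    "Question: Which product line grew fastest last quarter?\n"
    f"Data: {json.dumps(rows, default=str)}"
)
cur.execute("SELECT SNOWFLAKE.CORTEX.COMPLETE('mistral-large', %s)", (prompt,))
print(cur.fetchone()[0])
```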

These innovations reflect Snowflake’s focus on enabling users to gain deeper insights from their data more efficiently using advanced AI techniques.

Progress Software Enhances UI Libraries with New AI Code Assistants

Progress Software has incorporated new AI code assistants and expanded capabilities into the Q2 2025 releases of its popular UI libraries, Progress Telerik and Progress Kendo UI. These libraries are widely used for building modern applications with .NET and JavaScript frameworks.

The update introduces dedicated AI Coding Assistants for the Telerik UI for Blazor and KendoReact libraries. These assistants are designed to integrate with various AI-powered IDEs and automatically generate code snippets. This automation aims to reduce the need for manual coding and editing, leading to shorter development cycles.

Developers can also now utilize AI-driven theme generation. By using natural language prompts within Progress ThemeBuilder, developers can create custom styles and themes for both Telerik and Kendo UI components, simplifying the process of achieving a consistent look and feel.

The Q2 2025 releases further bring GenAI-powered reporting insights into Progress Telerik Reporting, using generative AI to provide summaries and insights from reports. A new GenAI-powered PDF processing library is also included, providing instant insights from documents. Other AI-focused additions in these releases include:

  • AI prompt options directly within the Editor control.
  • New AI building blocks and page templates to accelerate UI development.

These additions underscore Progress Software’s commitment to integrating AI into its development tools to boost developer productivity and streamline complex tasks like UI customization and report analysis.

IBM Establishes Watsonx AI Labs in New York City

IBM has launched watsonx AI Labs, positioning it as an innovation hub located in New York City. The primary goal of this lab is to connect AI developers with IBM’s extensive resources, tools, and expertise.

New York City was chosen as the location due to its vibrant tech ecosystem, which includes over 2,000 AI startups. IBM intends for the lab to not only support these emerging companies but also foster collaborations with local universities and research institutions, contributing to the city’s AI landscape.

Ritika Gunnar, general manager of data and AI at IBM, emphasized that watsonx AI Labs is designed to be more than a conventional corporate research facility. Its mission is to give top AI developers access to world-class engineers and resources, empowering them to build new businesses and applications aimed at reshaping AI for enterprise use cases. The investment in New York City is seen as a strategic move to tap into a deep and diverse talent pool and to engage with a community known for driving technological innovation.

These developments collectively showcase the rapid pace of innovation in the artificial intelligence sector, bringing new capabilities to developers, enhancing enterprise workflows, and creating dedicated spaces for AI research and collaboration.