Accelerating Enterprise Agentic Workflows with Informatica’s NVIDIA NIM Connector and Recipes
Last Published: Dec 10, 2025
Following our recent announcement of the Informatica and NVIDIA collaboration, we're excited to announce the general availability of the Informatica Intelligent Data Management Cloud (IDMC) NVIDIA NIM Connector. This powerful connector provides one-click access to GPU-optimized microservices and ready-to-use recipes for common AI agentic use cases.
Built on NVIDIA NIM, a family of containerized, enterprise-grade microservices, this integration makes high-performance AI a native part of data workflows for data engineers and AI architects, and offers significant advantages:
- Ease of Use: NIMs are pre-built with standard APIs, enabling quick deployment and radically simplifying AI integration.
- Reliability: Continuously managed and validated by NVIDIA, NIMs provide a stable, supported foundation for critical applications.
- Performance and Scale: Achieve low-latency, high-throughput inference.
- Portability: As cloud-native microservices, NIMs run on any NVIDIA-accelerated infrastructure—on-premises, in the cloud, or on a workstation.
Exposing these capabilities through a simple no-code connector enhances IDMC's AI-native integration fabric, making it easier than ever to bring trusted data and powerful NVIDIA AI to customers.
The NVIDIA NIM Connector Inside IDMC
We have built a native NVIDIA NIM connector directly into Informatica's no-code agent orchestration within IDMC. This new connector enables users to seamlessly use any NVIDIA NIM in their agentic workflows, making it easier to run high-performance AI model inferencing. Here’s a look at what you can achieve with the connector:
- LLM Inference: Enhance your LLM with chat completion, function calling for integrating external tools, and structured output for precise data formatting, among other features.
- Retrieval Models: Enhance your data retrieval and search capabilities with specialized models for creating embeddings and reranking results for better relevance.
- Visual and Multimodal AI: Tap into advanced models for image reasoning, OCR (Optical Character Recognition), and even image generation.
- Industry-Specific Solutions: Access specialized models for key industries like healthcare and climate simulation.
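Hosted NIM microservices expose OpenAI-compatible HTTP endpoints, which is what makes the chat-completion and structured-output features above easy to wire into a workflow. The sketch below assembles such a request; it is a minimal illustration only, and the base URL and model name are assumptions you would replace with your deployment's values.

```python
import json

# Assumed values for illustration -- substitute your own endpoint and model.
NIM_BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(api_key: str, model: str, prompt: str) -> dict:
    """Assemble an OpenAI-style chat-completion request for a NIM endpoint."""
    return {
        "url": f"{NIM_BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": 256,
        }),
    }

req = build_chat_request(
    "nvapi-...",                      # your NVIDIA API key
    "meta/llama-3.1-8b-instruct",     # illustrative model name
    "Summarize NVIDIA NIM in one line.",
)
print(req["url"])
```

Because the payload follows the common OpenAI chat schema, the same shape works across different NIM LLMs by changing only the `model` field.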
To get started, simply create a new App connection in IDMC, select "NVIDIA NIM" as your connection type, provide a name for your connection, and enter your API key. With these simple steps, you're ready to connect to NVIDIA NIM's services.
Pre-Built Recipes for Faster AI Agent Deployment

Figure 1. AI agent deployment
Informatica’s collaboration with NVIDIA extends far beyond connectivity. We have already implemented production-ready recipes (blueprints) within the CAI Recipe Homepage, cutting deployment time for AI solutions from weeks to minutes and accelerating time-to-value for AI agents. Rather than building from the ground up, enterprises can start from established, industry-standard workflow patterns, shortening development cycles, lowering risk, and enabling quicker returns on GenAI investments across customer support, marketing, operations, finance, and HR. Below are ready-to-use recipes that you can use as a starting point for building your own AI agents.
| # | Recipe Name | Goal | Enterprise Application |
|---|---|---|---|
| 1 | AI Agent for Jira using NVIDIA NIM | Automate Jira workflows with a natural-language AI agent. This recipe lets users send queries from a Slack channel, processes them with an NVIDIA NIM-powered AI agent, performs actions in Jira, and sends the results back to Slack | It can be used to streamline a variety of enterprise workflows, including IT help desks, customer support, and project management. It also provides a foundational agentic framework for interacting with other applications. |
| 2 | Simple RAG Consumption with NVIDIA NIM | Convert natural-language Slack questions into vector embeddings with NVIDIA NIM, search Pinecone, feed the top-K context to an NVIDIA NIM inference model to summarize an answer, and post the answer back to Slack | It can be used in a variety of productivity-boosting bots across the enterprise, including IT and HR help desks, technical support, RFP assistance, as well as sales and marketing enablement. |
| 3 | Generate Image from Text with NVIDIA NIM | Allow a user to send a description in Slack, invoke a diffusion-based image model via NVIDIA NIM, and stream the resulting image file back to Slack | Enables rapid, cost-effective creation of brand-consistent visuals that enhance marketing, product design, user experience, and operational efficiency. |
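The RAG recipe in row 2 follows the classic embed → retrieve → summarize pattern. The skeleton below sketches that flow under stated assumptions: `embed`, `vector_search`, and the corpus are stand-in stubs for the recipe's actual NVIDIA NIM embedding call and Pinecone top-K query, not the recipe's real assets.

```python
# Illustrative RAG pipeline skeleton mirroring recipe 2: embed the question,
# retrieve top-K context, and assemble a grounded prompt for the inference model.

def embed(text: str) -> list[float]:
    # Stub: in the recipe this would be an NVIDIA NIM embedding model call.
    return [float(len(w)) for w in text.split()]

def vector_search(query_vec: list[float], corpus: dict[str, list[float]], k: int = 2) -> list[str]:
    # Stub: in the recipe this would be a Pinecone top-K similarity query.
    # Naive "similarity" for the sketch: negative difference of vector sums.
    def score(doc_id: str) -> float:
        return -abs(sum(corpus[doc_id]) - sum(query_vec))
    return sorted(corpus, key=score, reverse=True)[:k]

def build_prompt(question: str, contexts: list[str]) -> str:
    joined = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {question}"

corpus_text = {"doc1": "VPN reset steps", "doc2": "Expense policy", "doc3": "Onboarding guide"}
corpus_vecs = {doc_id: embed(text) for doc_id, text in corpus_text.items()}

question = "How do I reset the VPN?"
top = vector_search(embed(question), corpus_vecs, k=2)
prompt = build_prompt(question, [corpus_text[d] for d in top])
print(prompt)
```

The final `prompt` is what the recipe would send to the NIM inference model, whose summarized answer is then posted back to Slack.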
Putting the Recipes into Action
Step-By-Step Setup
- Copy Recipe: Go to Application Integration > Recipes and search for "NVIDIA" to see all NVIDIA NIM recipes. Click a recipe, select Use, and choose a folder to copy the recipe assets into.
- Publish and Run: Configure and publish the connections (NVIDIA NIM, Pinecone, Jira, Slack, etc.), then publish the subprocess and the main process.
- Create and Run a Slack App: Create a new Slack app, enable Event Subscriptions, and point new-message events at the main process URL. Once done, you can add the app to any channel and start interacting with it.
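When you register the main process URL under Event Subscriptions, Slack first sends a `url_verification` POST and expects the `challenge` value echoed back; only then do message events start arriving as `event_callback` payloads. The framework-free dispatcher below sketches that handshake and hand-off; the function and field handling beyond Slack's documented payload shapes are illustrative assumptions.

```python
import json

def handle_slack_event(raw_body: str) -> dict:
    """Minimal sketch of the endpoint logic behind the main process URL."""
    payload = json.loads(raw_body)
    if payload.get("type") == "url_verification":
        # Echo the challenge so Slack accepts the endpoint.
        return {"challenge": payload["challenge"]}
    if payload.get("type") == "event_callback":
        event = payload.get("event", {})
        # Ignore the bot's own messages to avoid reply loops.
        if event.get("type") == "message" and not event.get("bot_id"):
            # Placeholder: hand the user's text to the agent process here.
            return {"forwarded_text": event.get("text", "")}
    return {}

# Slack's initial handshake when the URL is registered:
resp = handle_slack_event(json.dumps({"type": "url_verification", "challenge": "abc123"}))
print(resp)  # {'challenge': 'abc123'}
```

In the recipe itself, this logic lives inside the published main process; the sketch only shows why the process URL must answer the verification request before channel messages can flow.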

Figure 2. Recipe usage
Conclusion: A New Data-to-AI Flywheel
With this powerful collaboration, we're not just integrating two technologies; we're accelerating our shared mission to make trusted data and AI a tangible, transformative force for every enterprise. This is how we'll empower organizations to automate, optimize, and unlock their next great strategic opportunity. As NIM microservices expand, and as Informatica increases recipe coverage, enterprises gain a scalable, secure, and future-proof route from raw bytes to generative insight.
The journey from data to Agentic AI has never looked so achievable — or so exciting.