Both industry research and real-world experience show that 80% of the work in a big data project involves data integration and data quality. Informatica software includes the broadest set of data integration and data quality capabilities available on Hadoop, delivering fivefold productivity gains that transform more data into more accurate, insightful analysis in less time.

Informatica Big Data Edition

Powered by the Vibe™ virtual data machine, Informatica Big Data Edition delivers up to five times more productivity by allowing your developers to integrate almost any type of data at any scale — without having to learn Hadoop.

Key Features

  • A visual development environment dramatically increases productivity by eliminating hand coding
  • Hundreds of high-speed connectors and pre-built transformations integrate and cleanse all types of data
  • More than 100,000 trained Informatica developers worldwide simplify big data project staffing
  • Informatica's “Map Once, Deploy Anywhere” Vibe™ technology ensures that new data types and technologies don't slow you down

Big Data Parser

An easy-to-use, codeless environment for building data parsing transformations, Big Data Parser is optimized to process virtually any file format natively on Hadoop, efficiently and at scale.

Key Features

  • Pre-built parsers address a wide range of data sources, including logs, industry standards, documents, and binary or hierarchical data
  • A visual development environment for creating custom parsers eliminates the cumbersome work of developing and testing parsing logic
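
To make concrete the kind of hand-written parsing logic that a pre-built or visually designed parser replaces, here is a minimal Python sketch that turns an Apache-style web server log line into a structured record. The log format, field names, and parse_log_line helper are illustrative assumptions, not part of Big Data Parser.

```python
# Illustrative only: hand-coded parsing of an assumed Apache-style log line
# into a structured record, the kind of logic a pre-built parser replaces.
import re

LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\d+|-)'
)

def parse_log_line(line):
    """Turn one raw log line into a dict of named fields, or None if it does not match."""
    match = LOG_PATTERN.match(line)
    if match is None:
        return None
    record = match.groupdict()
    record["status"] = int(record["status"])
    record["size"] = 0 if record["size"] == "-" else int(record["size"])
    return record

if __name__ == "__main__":
    sample = ('203.0.113.7 - - [10/Oct/2013:13:55:36 -0700] '
              '"GET /index.html HTTP/1.1" 200 2326')
    print(parse_log_line(sample))
```

Writing and maintaining expressions like this for every source format is exactly the development and testing effort the visual environment is meant to remove.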

Vibe Data Stream for Machine Data

Based on Informatica’s established high-performance messaging technology, Informatica Vibe Data Stream provides highly available, reliable, real-time streaming data collection for big data analytics, operational intelligence, and traditional enterprise data warehousing.

Key Features

  • Real-time data collection works at high volume and high speed across a wide variety of data sources, over both local and wide area networks
  • Adaptable architecture enables one-to-one, one-to-many, many-to-one, and many-to-many connections
  • Vibe delivers directly to targets for either stream or batch processing
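
As a rough illustration of many-to-one collection, the following Python sketch has several simulated sources pushing events into a shared queue while a single collector delivers them to a target. The threading-and-queue design and all names here are assumptions for illustration and do not reflect Vibe Data Stream's architecture or API.

```python
# Illustrative only: a toy many-to-one collector. Several simulated sources
# push events into a shared queue; one collector drains the queue and
# delivers the events to a target (here, a plain list).
import queue
import threading
import time

events = queue.Queue()

def source(source_id, count):
    """Simulate a machine-data source emitting a few events."""
    for seq in range(count):
        events.put({"source": source_id, "seq": seq, "ts": time.time()})

def collector(target):
    """Drain the queue and deliver each event to the target until a sentinel arrives."""
    while True:
        event = events.get()
        if event is None:  # sentinel: no more events
            break
        target.append(event)

if __name__ == "__main__":
    delivered = []
    sink = threading.Thread(target=collector, args=(delivered,))
    sink.start()
    sources = [threading.Thread(target=source, args=(i, 3)) for i in range(4)]
    for s in sources:
        s.start()
    for s in sources:
        s.join()
    events.put(None)  # tell the collector to stop
    sink.join()
    print(f"collected {len(delivered)} events from {len(sources)} sources")
```

Vibe Data Stream is positioned to replace this kind of ad hoc plumbing with reliable, highly available delivery over both local and wide area networks.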

MDM

Informatica MDM enriches master data with big data details such as social insight, mobile geo-location information, and real-time transaction signals. The resulting multi-domain view of customers, products, and relationships improves operations and deepens customer understanding.

Key Features

  • Flexible multi-domain data model adapts to your unique business requirements
  • Reusable business rules accelerate and streamline MDM, data integration, and data quality projects
  • Multi-domain approach expands beyond a single data domain, use case, or region to increase agility and accommodate both current and future business needs

Data Masking

Informatica Data Masking delivers policy-based data security for applications running on Hadoop and other big data platforms. It minimizes the risk of exposing sensitive data as that data is stored and analyzed in big data projects, and it helps ensure compliance with data privacy mandates and regulations.

Key Features

  • Dynamic Data Masking policies require no customization or coding to protect data within Hadoop and other big data platforms
  • Existing access control policies govern data masking rules for sensitive data elements
  • Authorization policies restrict access to unmasked data to privileged users
  • Persistent Data Masking rules protect sensitive data to reduce the risk of data breaches in nonproduction environments
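
To show the dynamic-masking idea in isolation, here is a minimal Python sketch in which the same record is returned in full to a privileged role and with sensitive fields obscured for everyone else. The policy, roles, and field names are assumptions for illustration and are not Informatica's implementation.

```python
# Illustrative only: a toy role-based masking policy. Sensitive fields are
# obscured unless the requesting role is privileged.
MASKED_FIELDS = {"ssn", "credit_card"}        # assumed policy: fields to protect
PRIVILEGED_ROLES = {"compliance_officer"}     # assumed roles allowed to see raw values

def mask_value(value):
    """Replace all but the last four characters with asterisks."""
    text = str(value)
    return "*" * max(len(text) - 4, 0) + text[-4:]

def apply_masking(record, role):
    """Return the record unchanged for privileged roles, masked otherwise."""
    if role in PRIVILEGED_ROLES:
        return dict(record)
    return {key: mask_value(val) if key in MASKED_FIELDS else val
            for key, val in record.items()}

if __name__ == "__main__":
    row = {"name": "Ada Lovelace", "ssn": "123-45-6789", "credit_card": "4111111111111111"}
    print(apply_masking(row, role="analyst"))             # sensitive fields masked
    print(apply_masking(row, role="compliance_officer"))  # full values returned
```

Persistent masking applies the same kind of transformation once, writing masked values into nonproduction copies of the data rather than masking at access time.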

Data Archive

Informatica Data Archive is highly scalable, full-featured smart partitioning and data archiving software. It archives inactive data and legacy applications in a compressed but readily accessible form, significantly improving performance and lowering risk while cost-effectively managing data growth in a range of enterprise business applications.

Key Features

  • Smart partitioning significantly improves application performance
  • Database and unstructured data archiving improve maintenance efficiency and lower costs
  • Fine-grained data restoration returns archived data to production environments
  • Pre-built accelerators speed implementations for popular packaged applications
  • An integrated development environment supports custom and modified rules
  • A secure, highly compressed, immutable archive file supports compliance, retention, and access requirements