Take agility to the next level by using data virtualization for prototyping

Shed the waterfall model and reap the benefits of prototyping: earlier feedback, tighter collaboration, and more accurate results.



Consider this common scenario in your business: Marketing wants a single view of the customer and they demand it immediately. Your integration project gets bogged down, however, thanks to shifting schedules, changing priorities, and feedback loops. Business requirements become more complex, putting existing processes at risk. Business leaders become disheartened by slow progress and inaccurate data. Determined to sidestep lengthy delays, they roll out their own business intelligence tools, which add an additional layer of dysfunction. Does agile development even have a chance?

Any competitive advantage from big data, or even internal data, faces endemic challenges:

  • Business requirements are so complex that numerous feedback cycles are required to get them right.
  • Requirements change during the long development process.
  • Stakeholders do not get involved early enough in the process, so their opinions are not taken into consideration.

Data virtualization to the rescue

Smart businesses have found that data virtualization solves many of these problems by prototyping data warehouse changes and additions. Originally developed as a next-generation architecture to solve point-to-point integration issues, data virtualization is now used to validate business requirements by showing what the end result will actually look like. Most importantly, it helps shorten the feedback loop and results in more accurate feedback.

Prototyping builds confidence in the process. The business trusts that the resulting data will be accurate, and IT ensures that business requirements are reasonable before development even begins. As a result, it creates a culture of collaboration and alignment between the business and IT. But it has taken a couple of decades to get to this breakthrough.

The long road to shortening the feedback loop

Over the past 20 years, a variety of companies have tried, and failed, to effectively address these challenges. Many sought to solve point-to-point integration issues by building a data abstraction layer into their architectures.

Enterprise Information Integration (EII) was one such development. It allowed an organization to collect data from disparate sources in a virtual database so it could be used for business intelligence. Subsequently, data federation, a form of data virtualization, became popular in 2012. This technology offered business users the ability to access and view data in an easy-to-understand format.

Today’s data virtualization lets you apply both data integration and data federation within a single environment. It allows you to reuse virtual views instantly, for any project or application. And its role-based tools enable self-service and business/IT collaboration.
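To make the idea of a reusable virtual view concrete, here is a minimal sketch using SQLite. It stands in for a virtualization platform: two separately attached schemas play the role of disparate sources (a CRM store and an order store, both hypothetical names), and a single federated view gives business users one "single view of the customer" without copying or transforming the underlying data.

```python
import sqlite3

# One connection, two "sources": the main schema acts as the CRM system,
# and an attached in-memory database acts as the order system.
conn = sqlite3.connect(":memory:")
conn.execute("ATTACH DATABASE ':memory:' AS orders_src")

# Source 1: customer master data in the CRM.
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "Acme Corp"), (2, "Globex")])

# Source 2: transactional data in the order system.
conn.execute("CREATE TABLE orders_src.orders (customer_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders_src.orders VALUES (?, ?)",
                 [(1, 120.0), (1, 80.0), (2, 40.0)])

# The federated virtual view: a TEMP view, because in SQLite only temporary
# views may reference objects across attached databases. The data stays in
# place; only the query definition is stored.
conn.execute("""
    CREATE TEMP VIEW customer_360 AS
    SELECT c.name, COALESCE(SUM(o.amount), 0) AS total_spend
    FROM customers c
    LEFT JOIN orders_src.orders o ON o.customer_id = c.id
    GROUP BY c.id, c.name
""")

rows = list(conn.execute("SELECT * FROM customer_360 ORDER BY name"))
for name, total in rows:
    print(f"{name}: {total}")
```

Because the view is just a stored query over live sources, changing a business requirement means editing one view definition and re-running it, which is exactly what makes this style of prototyping fast enough to shorten the feedback loop.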

In the scenario described at the beginning of the article, business users are left extracting meaning from outdated and inaccurate data. Instead of benefiting from data, they make ill-informed decisions that result in lower productivity and higher costs. With data virtualization, by contrast, IT prototypes requirements and validates data early in the project lifecycle. The end result: more agile development and more effective business intelligence.

See how HealthNow used data virtualization and prototyping to keep up with the business transformation required by government regulations in this case study.

Related content


Get 3 steps closer to big data gold

NoSQL will not replace all your tools, but it will be a valuable addition to your toolbox.


Build on innovation and avoid reinvention

How can developers reuse objects if they don’t know they exist in the first place? Try employing a “write once, deploy anywhere” methodology.


An old-school technique for new-school big data

Use this obscure technique to realize the benefits of big data while avoiding flat file bottlenecks.