To succeed at big data you must be able to process large volumes of data, much of it unstructured. More importantly, you must be able to react swiftly to emerging opportunities and insights before your competitors do. A Disciplined Agile approach to big data is evolutionary and collaborative in nature, leveraging proven strategies from the traditional, lean, and agile canons. Collaborative strategies increase both the velocity and the quality of the work performed while reducing overhead. Evolutionary strategies, which deliver incremental value through iterative architecture and design modeling, database refactoring, automated regression testing, continuous integration (CI) of data assets, continuous deployment (CD) of data assets, and configuration management, build a solid data foundation that will stand the test of time. In effect, this is the application of proven, leading-edge software engineering practices to big data.
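Two of the evolutionary strategies named above, database refactoring and automated regression testing of data assets, can be illustrated together. The following is a minimal sketch, not a prescribed implementation: it uses an in-memory SQLite database, and the `customer` schema, the split-column refactoring, and the test values are all illustrative assumptions rather than anything defined in this entry.

```python
import sqlite3

def create_schema(conn):
    # Original schema: a single free-text name column (illustrative example).
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, full_name TEXT)")

def refactor_split_name(conn):
    # Database refactoring: split full_name into first_name/last_name,
    # migrating existing data as part of the same change.
    conn.execute("ALTER TABLE customer ADD COLUMN first_name TEXT")
    conn.execute("ALTER TABLE customer ADD COLUMN last_name TEXT")
    conn.execute("""
        UPDATE customer SET
            first_name = substr(full_name, 1, instr(full_name, ' ') - 1),
            last_name  = substr(full_name, instr(full_name, ' ') + 1)
    """)

def regression_tests(conn):
    # Automated regression tests: verify both the new schema shape
    # and that existing data survived the migration intact.
    cols = {row[1] for row in conn.execute("PRAGMA table_info(customer)")}
    assert {"first_name", "last_name"} <= cols
    first, last = conn.execute(
        "SELECT first_name, last_name FROM customer WHERE id = 1").fetchone()
    assert (first, last) == ("Ada", "Lovelace")

conn = sqlite3.connect(":memory:")
create_schema(conn)
conn.execute("INSERT INTO customer (id, full_name) VALUES (1, 'Ada Lovelace')")
refactor_split_name(conn)
regression_tests(conn)
```

In a CI pipeline for data assets, a suite of such tests would run automatically against every schema change, so that each refactoring either proves it preserved existing data or fails the build.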
- Ambler, S. W. (2002). Agile modeling: Effective practices for extreme programming and the unified process. New York: Wiley.
- Ambler, S. W. (2013). Database testing: How to regression test a relational database. Retrieved from http://www.agiledata.org/essays/databaseTesting.html.
- Ambler, S. W., & Lines, M. (2012). Disciplined agile delivery: A practitioner’s guide to agile software delivery in the enterprise. New York: IBM Press.
- Ambler, S. W., & Sadalage, P. J. (2006). Refactoring databases: Evolutionary database design. Boston: Addison-Wesley.
- Guernsey, M., III. (2013). Test-driven database development: Unlocking agility. Upper Saddle River: Addison-Wesley Professional.
- Lindstedt, D., & Olschimke, M. (2015). Building a scalable data warehouse with Data Vault 2.0. Waltham: Morgan Kaufmann.
- Sadalage, P. J. (2003). Recipes for continuous database integration: Evolutionary database development. Upper Saddle River: Addison-Wesley Professional.