Eric Hills

Lack of adoption and poor data quality are two of the most common reasons that front-office software projects fail. Interestingly, adoption itself is tied to data quality — users reject systems running on unreliable or incomplete data. Even when mediocre data is good enough to achieve some level of adoption and impact, project ROI is reduced to a fraction of the original business case.

So, it’s no exaggeration to say that the benefits of analytical applications and platforms are constrained by the data supplied to them. Sales, marketing, pricing and CPQ apps are especially data-driven, and thus live or die based on the quality and quantity of input and system-generated data.

Unfortunately, many software deployments start slow or languish because enterprise data is not up to the job. Data is rarely available with the format and completeness required, so someone must integrate and prepare it to fit the target application’s data model. Triaging the data usually takes much longer than expected, extending project timelines and creating delivery risk. Delays are common because most companies lack both ad hoc data preparation skills and a detailed understanding of the data and the spec.

Data challenges persist even after applications go live. The initial prep work needs to be operationalized so that running updates and resolving glitches don’t require excessive manual effort. Users rightly expect data quality to improve over time, but too often reliability degrades as project resources roll off and anomalies pile up.

Do these challenges sound familiar? Good news: there is an answer.