Business Analytics Audit

In the final project phase, or after completing a Performance Management (PM) or Business Intelligence (BI) project, it is important to assemble the lessons learned from the different project phases and, where possible, transform them into a set of recommendations for future projects. In addition to these typical closing activities from an evaluation perspective, it is also advisable to execute a PM or BI project review or project audit, either on a periodic or on an ad-hoc basis.

A project audit provides an opportunity to uncover issues, concerns and challenges encountered during the project lifecycle. The main goal of a project review is to identify the mistakes made during the project, or to pinpoint where problems may occur if certain changes are not made to current processes. At the same time, it identifies the activities that were executed satisfactorily and thus contributed to the success of the project, so that they can be repeated in future projects.

A project audit will, amongst other things, review the topics below by interviewing the key actors in the project and by reviewing the key material assembled during the project.

When starting up a project, it is of the utmost importance to use a methodology suited for the job, tailored to the specific characteristics of a PM or BI project. Within PM or BI projects it is also vital, in order to keep the momentum and the support of the business, that the overall timeline of the project remains limited and in line with the original plan. The project should therefore typically be split up into iterations, so that business needs can be satisfied within a reasonable amount of time and the project starts delivering value and return on investment early. The methodology should clearly describe what tasks and activities need to be executed within the project, when, and by whom.

This can only be accomplished by:

  • Splitting up the project into phases which can be clearly managed, assigned and tracked for progress.
  • Clearly attaching the right deliverables to each phase and its corresponding activities, incorporated via templates for the sake of consistency and simplicity. For example:
    • When should source analysis occur?
    • What severity and volume of data quality issues can we expect?
    • Should it happen in the design phase, in the build phase, or ad-hoc?
    • How do we need to manage this in terms of roles & responsibilities?
    • How and where do we need to manage key design decisions?
  • Adding quality assurance gates for each phase and/or deliverable, in the form of key criteria used to decide whether a new phase may start (a minimal sketch of such a gate follows this list).
    • E.g. which key criteria should be met for a source-to-target design? What happens when criteria fail?
  • Adding ownership for each phase and/or deliverable and the corresponding quality assurance gates.
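
As a minimal sketch of what such a quality assurance gate could look like, the example below models a gate as a set of named criteria whose combined evaluation decides whether the next phase may start. The phase, criteria and owner names are assumptions made for illustration, not part of any specific methodology.

  from dataclasses import dataclass


  @dataclass
  class GateCriterion:
      """One key criterion of a quality assurance gate (illustrative)."""
      name: str
      passed: bool
      owner: str  # who is accountable for following up on this criterion


  def gate_decision(criteria: list[GateCriterion]) -> bool:
      """The next phase may only start when every criterion has passed."""
      failed = [c for c in criteria if not c.passed]
      for c in failed:
          print(f"Failed criterion '{c.name}' - follow up with {c.owner}")
      return not failed


  # Example: a gate attached to the source-to-target design deliverable.
  design_gate = [
      GateCriterion("All sources mapped to target columns", True, "data architect"),
      GateCriterion("Data quality issues assessed per source", False, "data steward"),
      GateCriterion("Key design decisions documented and signed off", True, "project lead"),
  ]

  print("Start build phase:", gate_decision(design_gate))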

Within a PM or BI methodology, sufficient time should be allocated to gathering and safeguarding all requirements. These requirements will be the basis for the PM or BI solution to be built. A distinction can be made between functional requirements, which add value to the business (e.g. report xyz), and non-functional requirements, which define the characteristics, properties or qualities the solution must possess (e.g. a nightly window of only 3 hours that can be allocated to all ETL processes). In this example, the non-functional requirement may become crucial in the build and technical testing phases. Given its criticality, it might be advisable to split the build and technical test phases into smaller parts that can be managed separately, keeping the limited overall time window in mind, rather than tackling the constraint as a whole only after the build phase has been completed.
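
As an illustration of how this kind of non-functional requirement can be monitored per part rather than only at the end, the sketch below sums the measured run times of individual ETL jobs and compares the total against the 3-hour nightly window. The job names and durations are invented for the example.

  from datetime import timedelta

  # Hypothetical nightly ETL jobs with their measured run times (illustrative only).
  etl_job_durations = {
      "load_customers": timedelta(minutes=40),
      "load_orders": timedelta(minutes=75),
      "build_sales_mart": timedelta(minutes=50),
  }

  # The non-functional requirement: all ETL processes must fit within a 3-hour window.
  nightly_window = timedelta(hours=3)

  total = sum(etl_job_durations.values(), timedelta())
  remaining = nightly_window - total

  print(f"Total ETL runtime: {total}, window: {nightly_window}")
  if remaining < timedelta():
      print("Window exceeded: the non-functional requirement is violated.")
  else:
      print(f"Within window, {remaining} of headroom left.")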

Requirements should also be defined in a SMART way (S = Specific, M = Measurable, A = Attainable, R = Realistic, T = Timely), ideally leaving no room for interpretation. Finally, next to its definition, each requirement should also get a clear owner, versioning (are requirements disappearing and re-appearing? How is change managed? How are change requests processed and tracked?) and a corresponding priority.
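
To make these attributes concrete, the sketch below shows one possible way to capture a requirement as a record with an owner, a version, a priority and a change history. The field names, the MoSCoW-style priority and the example requirement are assumptions for illustration, not a prescribed format.

  from dataclasses import dataclass, field
  from enum import Enum


  class RequirementType(Enum):
      FUNCTIONAL = "functional"
      NON_FUNCTIONAL = "non-functional"


  @dataclass
  class Requirement:
      """Illustrative requirement record; the fields are assumptions, not a standard."""
      req_id: str
      description: str              # should be SMART and leave no room for interpretation
      req_type: RequirementType
      owner: str                    # who answers questions about this requirement
      version: int = 1              # incremented with every approved change request
      priority: str = "must"        # e.g. a MoSCoW-style priority
      history: list[str] = field(default_factory=list)  # trail of change requests


  etl_window = Requirement(
      req_id="NFR-001",
      description="All nightly ETL processes complete within 3 hours.",
      req_type=RequirementType.NON_FUNCTIONAL,
      owner="data warehouse architect",
  )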

In terms of project management, all people involved and their associated roles should be clearly defined.

  • How and when will the formal and informal communication between all actors take place?
  • Has knowledge transfer between different teams and persons been properly taken care of?
  • Have all applicable deliverables been approved at the right moment in time by the right people?
  • Is the information distributed to the correct persons?
  • Are version control and version history managed properly for documentation and end products?
  • Have people with the right skills been put on the correct activities?
  • Which assumptions, constraints and exclusions have been defined, and have they been respected? Are project issues, data quality issues and risks properly managed?

In terms of project planning:

  • Has enough time been foreseen for all defined activities?
  • What about inter-dependencies between sub-phases or certain activities?
  • Have certain tasks been omitted, and why? Does every task have a planned start and end date, an owner and people assigned to it?
  • Is a booking system for actuals available? Have all persons been booking time on the applicable phases and/or activities?
  • Which activities are outliers, requiring much less or much more time than planned, and what are the reasons?
  • Which tools are used for project management and document management?

The testing of a PM or BI solution is aimed at determining whether the solution satisfies the specified requirements, demonstrating that it is fit for purpose and detecting defects. It is important to start thinking about testing early on in the project. Apart from the actual execution of the tests and the related bug fixing, a lot of preparatory work needs to be done to ensure efficient and effective testing of the solution.

Here are some examples of preparatory work that needs to be done:

  • Defining the overall scope (what will be tested and to which detail?) and approach for testing.
  • Defining the roles & responsibilities with respect to testing.
  • Defining the test levels (unit testing, integration testing, regression testing, performance testing (from back-end & front-end perspective), user acceptance testing, transition testing).
  • Creating the test scenarios (test scripts).
  • Defining the acceptance criteria that will be used to decide whether a test is successful or not (a minimal example follows this list).
  • Defining the test priorities, to be applied when the time available for testing is reduced for whatever reason.
  • Defining the test environments, and which test level applies to which environment.
  • Defining the test planning.
  • Defining the test tool set for capturing the test scenarios and the associated test results, defects and defect follow-up, including the definition of issue types and issue statuses.
  • Defining an overall test follow-up method and/or tool.
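
As an illustration of the acceptance-criteria item above, the sketch below shows one possible automated check: a simple source-to-target row count reconciliation. The table names, the in-memory database standing in for source and target, and the pass condition are all assumptions made for the example.

  import sqlite3

  def rowcount_reconciliation(conn: sqlite3.Connection, source: str, target: str) -> bool:
      """Acceptance criterion (illustrative): the target holds as many rows as the source."""
      src = conn.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
      tgt = conn.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
      print(f"{source}: {src} rows, {target}: {tgt} rows")
      return src == tgt


  # Self-contained demo: an in-memory database stands in for the real source and target.
  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE src_orders (id INTEGER)")
  conn.execute("CREATE TABLE dwh_orders (id INTEGER)")
  conn.executemany("INSERT INTO src_orders VALUES (?)", [(i,) for i in range(100)])
  conn.executemany("INSERT INTO dwh_orders VALUES (?)", [(i,) for i in range(100)])

  assert rowcount_reconciliation(conn, "src_orders", "dwh_orders"), "Acceptance criterion failed"
  print("Acceptance criterion met: source and target row counts reconcile")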

All of the above topics are only a set of examples of what needs to be verified when conducting a project audit that tries to answer the question of whether or not the project was managed correctly, including documenting the lessons learned and adapting processes and project & program management practices to the findings.

Contact us for more information on Performance Management & Business Intelligence Audit.