Introducing Cognos Analytics 11 - Benefits & Features
This is part one of a series discussing Cognos Analytics 11. Part one takes a closer look at the interface, data modules and the new dashboarding module. Part two will discuss report authoring, installation and migration/upgrading.
Cognos Analytics 11 – code named Titan – is the latest major release of the IBM Business Analytics software suite and has been available for download since the end of 2015. It is the successor to the Cognos Business Intelligence 10 suite that was released in 2010. There was some speculation that IBM would merge this product into another product portfolio and that the Cognos brand name would disappear, but luckily it has not come to that.
Cognos Analytics 11 uses a completely new, modern interface with a stylish look and feel. IBM has worked very hard on improving the user experience to make navigation and functionality as easy as possible. The screen layout looks much cleaner as functionality is only exposed on demand, using sliding panels. Report viewing, authoring and browsing the content store are all done within the same browser window, and the same screen space is used differently depending on the context. This makes for an intuitive and user-friendly interface. An extensive effort has been made to offer a seamless transition between traditional desktop browsers and mobile devices by using responsive layouts that automatically resize depending on the screen size.
Figure 1 – New interface
Cognos Analytics 11 is positioned not only towards the professional report author but also specifically towards power users and data scientists, offering Watson-like features such as natural language search and automatic chart proposals.
With this release, data modules are introduced. Data modules are intended for business users to quickly build small and focused modules. Unlike previous versions of Cognos BI, it is easy to integrate multiple data sources in a single data module. Data modules can be used to build both dashboards and reports. This approach offers a much faster time to market for a specific report but, of course, does not undo any of the disadvantages of not having a data warehouse with a single version of the truth.
A new dashboard module is launched that replaces Cognos Workspace. It is positioned as an end-user-driven data discovery tool. Where Cognos Workspace reused existing report components, the dashboard module offers a real authoring tool with the ability to create charts and graphs.
Second, a new reporting environment is introduced. This module replaces Workspace Advanced, Query Studio, Analysis Studio and Report Studio. It combines the functionality of Report Studio with the live data preview that was available in Workspace Advanced. The reporting module is not only used for standard (active) reports but is also targeted specifically as an end user tool for data analysis and discovery.
What about the other Studios that were available in Cognos 10? Analysis Studio, Query Studio, Event Studio and Workspace remain available as companion apps but are not enabled by default. These apps are exactly the same as in Cognos 10.2.2, so all existing content from a previous installation will continue to work. However, the companion apps are merely included to allow a buffer period for migrating to the new modules. Workspace, Analysis Studio and Query Studio will be dropped in the next release of Cognos Analytics.
The Cognos Connection portal has been replaced by a new interface named Content Explorer that is much easier to use than the old one. All functions, such as search, administration and consuming and creating reports, are performed from the same browser window. The new interface uses a series of sliding panels that show additional functionality when needed. As a result the canvas space is used more efficiently and more content can be shown in the same screen space. The sliding panels offer both a basic view and a more advanced list-style view. Multiple reports, dashboards or data modules can be opened at the same time and shown on the canvas by selecting the desired item in the upper middle menu of the canvas.
Figure 2 – Welcome page
Figure 3 – Sliding panels with basic view
Figure 4 – Different reports opened at the same time
Public Folders and My Folders have been renamed to Team Content and My Content. These can be searched effectively with a new search engine. The results are always up to date and no longer require an index job to be created and executed regularly. Typing in a keyword returns results, which can be refined further using the filter button. Search includes results for saved report output and for column or table labels in data modules. Archived content is not searchable.
Figure 5 – Searching for Revenue
When a report is used often, the user can subscribe to it. Every time a new version of the report becomes available, the user receives a notification. Subscribing is only possible while viewing a live version of a report; the current prompt values are saved and the user is prompted to create a schedule. A user cannot subscribe to a report while viewing saved report output. However, on saved report output a notification can be enabled that warns the user when a new version is available. Scheduling a report the traditional way is also possible by opening the report properties and selecting schedule.
Just like in previous Cognos versions, report entries can be hidden, and a checkbox in My Preferences makes hidden entries visible again. Hidden entries are never shown in search results. Hidden entries are used internally as well: subscriptions created by a user are actually report views, hidden in a folder that resides in My Content.
Figure 6 – Creating a subscription
Figure 7 – Notification of a new available version
Before taking a deeper look into data modules, let us first discuss the functionality to upload data files. A user can directly upload Excel, CSV and tab-delimited text files into Cognos Analytics. The result can be used immediately to build dashboards without the need for a data module. To be able to use the data in Reporting, the file should be uploaded and incorporated into a data module.
Uploading text files is fairly basic: the only thing that can be changed is whether a field is a measure or a dimension value. Data can be refreshed when new versions of the file become available. Structural changes to the file, such as a new column, cannot be accommodated by the update process; a new upload definition needs to be created in that case. When a file is uploaded it is saved in My Content and can be used directly to create dashboards.
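The measure-versus-dimension split the upload wizard proposes roughly corresponds to whether a column is numeric or textual. A minimal sketch of that heuristic in Python (our own illustration, not Cognos code):

```python
import csv
import io

def classify_columns(csv_text):
    """Classify each column as 'measure' (all values numeric) or
    'dimension' (text), mimicking a typical upload-wizard default."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    roles = {}
    for column in rows[0].keys():
        try:
            for row in rows:
                float(row[column])        # every value parses as a number
            roles[column] = "measure"
        except ValueError:
            roles[column] = "dimension"
    return roles

sample = "Country,Revenue\nBelgium,1200.50\nFrance,980.00\n"
print(classify_columns(sample))  # {'Country': 'dimension', 'Revenue': 'measure'}
```

In Cognos the proposed role can then be overridden manually, which is the one change the upload dialog allows.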
The content of the data file is not stored in the content store but on the file system, using the Apache Parquet format. Parquet is a compressed, columnar storage format that is used in Hadoop and allows for quick retrieval of the data. This differs from 'External data' and 'My data sets' in previous versions, which were stored in DB2.
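Why a columnar format such as Parquet retrieves data quickly is easy to see with a toy example: reading one measure scans a single contiguous list instead of visiting every record. This is a pure-Python illustration of the two layouts, not the actual Parquet implementation:

```python
# Row-oriented layout: one record per entry; reading one field
# means touching every record.
rows = [
    {"product": "A", "qty": 10, "revenue": 120.0},
    {"product": "B", "qty": 4,  "revenue": 55.5},
    {"product": "C", "qty": 7,  "revenue": 80.0},
]

# Column-oriented (Parquet-style) layout: one contiguous list per
# column, which also compresses well because values are homogeneous.
columns = {
    "product": ["A", "B", "C"],
    "qty":     [10, 4, 7],
    "revenue": [120.0, 55.5, 80.0],
}

total_columnar = sum(columns["revenue"])           # scans one list
total_rows = sum(r["revenue"] for r in rows)       # walks all records
assert total_columnar == total_rows == 255.5
```

Analytical queries typically aggregate a few columns over many rows, which is exactly the access pattern the columnar layout favours.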
Figure 8 – Uploading a text file
Data modules represent a major paradigm shift. Previously, the use of a central metadata layer or framework was considered best practice, and users were encouraged to build reports using this central reporting package. In reality the data warehouse often suffers from a serious backlog, removing the flexibility of quickly adding new sources to reporting. Data modules fill this void by offering a lightweight data modeling tool that allows multiple data sources to be combined. Data modules do not replace Framework Manager, Dynamic Cubes or Transformer; these tools remain available to handle more complex modeling challenges that require more bells and whistles. The main audience for data modules is power users and data scientists who want to quickly combine some data in order to start building reports. Frameworks and Dynamic Cubes are built by the BI competence centre or IT, as this requires detailed knowledge of the underlying source models.
Data modules can be created using three source types: data servers, uploaded files and other data modules. Multiple input sources can be combined in a single data module. Data servers are nothing more than connections to a database. However, data sources created in the Cognos Administration module are not exposed as data servers by default and are therefore invisible to end users; the 'Allow web-based modeling' property needs to be enabled manually. A data server always uses the JDBC connection of a data source, so only relational data sources and Hadoop-based technologies can be used to build data servers. Multidimensional data sources cannot be used to build data modules and, by extension, dashboards.
While creating a data module, fields can be added by dragging and dropping, but also by using intent-driven modeling. Typing in a keyword such as 'sales' shows a number of matches, and by adding the proposal the tool generates the entire data model. Pay attention not to make mistakes and test the model before saving: there is no undo button!
Figure 9 – Intent driven modeling
Figure 10 – Generated model
The modeling tool allows setting basic properties such as renaming tables and items. Hiding items or creating subfolders like in Framework Manager is not possible. This is particularly annoying in a data warehouse setting where fact and dimension tables are joined on surrogate keys, as these keys cannot be hidden. Our advice would therefore be to join all appropriate data in advance in a view per star schema and expose those objects for end users to base their data modules on.
Fields can be deleted from the model, so redundant fields that are not needed by the user can be removed from the data module to prevent the model from becoming too large when snapshots are used.
Another approach would be to have the BI team pre-build a number of base data modules that can be used as a source to combine with additional data. This method would both allow IT governance of data coming from the datawarehouse and the flexibility of adding new data by end users.
At table level, additional filters can be set to restrict unneeded data. Since the data can be stored in a snapshot, it is important to get rid of unneeded data to maximize performance. Joins between tables or data sources can be added or edited if needed. Custom calculations can be created at table level, allowing for basic calculations, grouping data and cleaning text data. The interface for creating these functions is very minimalistic: a list of functions only appears after typing the first letter, so there is no full overview of all available functions. Luckily, the full list is contained in the PDF documentation.
Figure 11 – Custom Groups
Data can be grouped in custom groups. A new case expression is created and added as an extra field to the table containing the custom group. Creating hierarchies is not possible, as data modules are not dimensional in nature.
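Under the hood such a custom group is simply a generated case expression that buckets member values into named groups. The equivalent logic, sketched in Python with illustrative group names and members of our own choosing:

```python
def region_group(country):
    """Equivalent of the case expression a custom group generates:
    each branch maps a set of members to a group label.
    (Group names and members here are illustrative, not from Cognos.)"""
    if country in ("Belgium", "Netherlands", "Luxembourg"):
        return "Benelux"
    elif country in ("France", "Germany", "Spain"):
        return "Rest of Europe"
    else:
        return "Other"          # catch-all branch for ungrouped members

print(region_group("Belgium"))  # Benelux
```

Because the result is just an extra calculated field on the table, it can be dragged into charts and filters like any other data item.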
Figure 12 – Text Cleaning
Text cleaning handles basic cleaning of text attributes. It is possible to trim fields, convert to upper or lower case, or return a substring of values. Additional functions are not available here, but they can be entered manually by modifying the field expression.
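The three cleaning operations map directly onto ordinary string functions. A sketch of one possible combination in Python (the particular chain of steps is our own example):

```python
def clean(value):
    """Apply the cleaning steps the module exposes:
    trim, case conversion and substring."""
    trimmed = value.strip()    # trim leading/trailing whitespace
    upper = trimmed.upper()    # convert to upper case
    return upper[:3]           # substring: keep the first three characters

print(clean("  brussels "))  # BRU
```

Anything beyond these basics, such as replacing characters, has to be written by hand in the field expression.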
At field level, a limited set of properties can be manipulated, such as the usage, aggregate type and pre-sorting. The expression of every field can be modified, adding extra flexibility when additional functions are needed to present the field appropriately. Data modules can be tested at any time: clicking 'Try it' opens the reporting module with the data module as a source.
Data modules can be used in live mode and snapshot mode. In live mode, a query is launched to retrieve the data every time a report is run. In snapshot mode, all the appropriate data in the module is stored on the file system using the Apache Parquet columnar storage mechanism. The response times in snapshot mode are impressive, but the downside is the lack of live data. Snapshots cannot yet be scheduled to refresh automatically and can only be refreshed manually; this functionality will be added in upcoming releases. Snapshots also change the backup strategy: previously, backing up the content store was sufficient, but when data modules are used it is essential to also back up the data folder on the file system. When importing a deployment archive, make sure that IDs are kept, as the IDs are the link between the metadata and the physical file.
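The difference between the two modes boils down to querying the source on every request versus serving a stored copy until it is manually refreshed. A toy sketch of that behaviour (class and method names are our own, not a Cognos API):

```python
class DataModule:
    """Toy model of live vs. snapshot retrieval; `query_source`
    stands in for a real database round-trip."""

    def __init__(self, query_source, use_snapshot=False):
        self.query_source = query_source
        self.use_snapshot = use_snapshot
        self._snapshot = None

    def get_data(self):
        if self.use_snapshot:
            if self._snapshot is None:    # first access fills the snapshot
                self._snapshot = self.query_source()
            return self._snapshot         # fast: no query until refreshed
        return self.query_source()        # live: query on every request

    def refresh_snapshot(self):           # manual refresh only, as in 11.0
        self._snapshot = self.query_source()

calls = []
source = lambda: calls.append(1) or [("A", 10), ("B", 4)]
module = DataModule(source, use_snapshot=True)
module.get_data()
module.get_data()
print(len(calls))  # 1 -- the source was queried only once
```

A live-mode module built the same way would hit the source on both calls, which is exactly the latency trade-off described above.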
Figure 13 – Snapshot properties
Access to the data module can be secured by using the default security features. Unlike Framework Manager or Dynamic Cubes, data access cannot be restricted to show certain rows or members to specific users.
In conclusion, data modules should be considered a lightweight solution that enables end users to quickly model simple flat files or data models, without the advanced features that are available in tools like Framework Manager or Dynamic Cubes. They are perfectly suited to quickly prototyping a report or adding data sources that are not yet available in the data warehouse. When used responsibly, these modules help the user quickly add new data that can later be picked up by the data warehouse team.
The dashboarding module is marketed as a self-service BI tool for end users. Using data modules or imported text files as a source, the user can easily build a combination of charts in an attractive layout. When charts are added they are linked automatically, so a filter applied to one of the charts applies to all. When creating a new dashboard, three different canvas styles can be chosen: single page, tabbed and infographic. For each style a number of different layout grids are available. These responsive grids automatically resize the content when it is viewed on other devices, such as mobile.
Figure 14 – Creating a dashboard using templates
When creating a new graph, the dashboard module automatically tries to determine the chart best suited to the data: just drag a number of columns onto the canvas and the tool does the rest. Another method is typing in a key phrase; using Watson-like technology, the dashboard module proposes an appropriate visualization. Charts can also be added manually and filled using drag and drop.
Figure 15 – Creating graphs Watson style
A dashboard can contain data from multiple data modules. However, charts can only be linked when they use data from the same module; data cannot be linked between data modules. A way around this is to combine multiple data modules into a single data module, but this might not be trivial for the end user. Special care should be taken to get the cardinality in data modules right, especially when using snowflakes; if not, the base dimension will be regarded as a fact, returning an error when trying to filter on the snowflake table. Dashboards can only be built from data modules or imported text files; frameworks or cubes cannot be used as a source. This means that only relational data sources can be used to build dashboards.
Filtering the dashboard can be done by clicking on a chart item or by adding a single data item. The data is automatically filtered and the filtered data is greyed out. Multiple filters can be combined. The filter symbol indicates which filters were applied to the chart; a filter can be reset by clicking anywhere in the chart except on the bar or value itself. Filters are limited to the current tab page.
Figure 16 – Filtering in the graphs
You can also filter on data items that are not explicitly part of the dashboard. At the centre bottom of the canvas is the data tray filter, which allows the user to filter and/or sort on any element in the data module, even when it is not in the charts. This is a very powerful feature for effectively filtering data.
Figure 17 – Filtering using the data tray
The canvas can contain media, text and charts. The properties that can be set for charts and other objects are rather limited and will be expanded in future releases.
Figure 18 – Limited options
The dashboarding module allows business users to quickly and easily create basic dashboards. We are very keen to see what additional functionality future releases will bring; support for dimensional data sources and drill up/down would be the number one feature on our wish list. Our conclusion is that this tool is usable for the end user but needs improvement in terms of exposing more advanced properties and adding functionality and data sources to offer a credible alternative to competing dashboarding tools.