Different Types of Reports

Standard versus Ad-hoc | Condensed versus Detailed | Key Figures versus Master Data | Replication in the Data Warehouse versus Direct Data Access


The conventional method of allocating reporting tools according to responsibility for different types of reports has proved to be a hindrance time and time again. This section examines why this is the case and how the problems can be tackled:

Summary: If you facilitate a smooth transition from ad-hoc reporting to reusable reports, constructing a reporting landscape becomes considerably easier. If the top-down and bottom-up approaches are interpreted as two metaphors for navigating the same data, it is not necessary to divide reports into key figure reports and master data reports. A good Data Warehouse also makes real-time reporting possible.


Standard versus Ad-hoc


Many reports are executed not once, but many times. Therefore, they should not have to be recreated each time; instead, it should be possible to re-access templates and fill them automatically with new data. Whereas earlier, in the days of 3270 screens and matrix printers, there was relatively little freedom in structuring output, today, in the age of graphical user interfaces and color and laser printers, considerably higher demands are made. These, of course, increase the amount of time spent on creating reports.

The more structuring options that reporting tools make available, the less the user will have to alter completed reports by hand. The days when pure ASCII output had to be painstakingly processed by hand in a text or graphic editor should definitely remain in the past.


However, if users would like a general view of certain facts for which no pre-defined report exists, they need a tool that enables them to compile this information quickly themselves. It is important that they can quickly find the data they need. They need a tool that provides a business-operations view of the relevant data fields - characteristics, key figures and other attributes - and their links. This tool must be clear, and must help users find the fields they need by filtering and sorting.

If users have gathered together the data fields they want, they will then want to put them into a suitable layout geared towards their tasks. The requirements in such cases are normally less extensive than with standard reports, since the main requirement is not visual appearance but that they are functional - users want to create their reports quickly and be able to process data logically in ways that are best suited to their needs.

Blurring the Boundaries

However, ad-hoc reports are often called up not once, but several times. Users should not have to choose between two different options - they should be able to save an ad-hoc report and rework the visual image if necessary. Since, in many companies, IT personnel no longer have sufficient capacity to create all of the required reports, it is an advantage if end users can create standard reports themselves and leave only the fine-tuning to the IT specialists.

On the other hand, the IT department or software house that has developed the tools for creating reports can deliver special modules and templates (with the necessary powerful functions) that make the creation of ad-hoc reports considerably easier and quicker, meaning that report creators need not always start from scratch. In many cases, creating a report can also be helped if report templates exist that can be adjusted to meet current requirements with suitable parameters when executing the report.

What does this mean?

The number of options available for creating ad-hoc reports is growing, and the transition to reusable reports is becoming smoother. The desired result is obtained more quickly, and the workload in IT departments is reduced.


Condensed versus Detailed


In the case of reports with condensed data, the main focus is on data that can be aggregated. Usually the top-down method is used: users start at a relatively high level of condensation and then display successively more detail for important data (zoom in or drilldown). Experts also change (navigate) between different views (slice and dice).
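The top-down pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not a description of any actual reporting tool; the field names and figures are invented for the example.

```python
from collections import defaultdict

# Hypothetical fact records: characteristics plus one key figure ("amount").
records = [
    {"region": "North", "product": "A", "amount": 120},
    {"region": "North", "product": "B", "amount": 80},
    {"region": "South", "product": "A", "amount": 200},
]

def condense(records, by):
    """Aggregate the 'amount' key figure over the given characteristics."""
    totals = defaultdict(int)
    for rec in records:
        key = tuple(rec[field] for field in by)
        totals[key] += rec["amount"]
    return dict(totals)

# Start at a high level of condensation ...
print(condense(records, by=["region"]))   # {('North',): 200, ('South',): 200}
# ... then drill down for more detail (zoom in):
print(condense(records, by=["region", "product"]))
# Slice and dice: switch to a different view of the same data.
print(condense(records, by=["product"]))
```

Each drilldown or slice-and-dice step is just a re-aggregation of the same records over a different set of characteristics.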


The other approach is the bottom-up approach. This is used mostly for reports that either contain hardly any data fields that can be aggregated, or for those in which the main focus from the start is on a detailed and complete overview of all selected data. To get an overview of the structure, particularly with large quantities of data, grouping and highlighting with color and font attributes are used.

Blurring the Boundaries

It is becoming more and more evident that the traditional drilldown method is not always the method of choice. Especially in cases where users want an overview of details from several areas, it is more sensible to use a display hierarchy in which the individual branches can be expanded and collapsed. This corresponds to a presentation as a hierarchical detailed overview in which only the upper nodes are displayed on the initial screen.
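Such an expandable display hierarchy can be sketched as follows; the tree structure and node names are purely illustrative assumptions.

```python
# A hypothetical display hierarchy: each node maps to its child nodes.
tree = {
    "Sales": {"North": {}, "South": {}},
    "Production": {"Plant 1": {}},
}

def render(children, expanded, name="Company", depth=0, lines=None):
    """List the visible nodes: a node's children appear only if it is expanded."""
    if lines is None:
        lines = []
    lines.append("  " * depth + name)
    if name in expanded:
        for child, sub in children.items():
            render(sub, expanded, child, depth + 1, lines)
    return lines

# Initial screen: only the upper nodes are shown.
print(render(tree, expanded={"Company"}))
# Expanding one branch reveals its details while the others stay collapsed.
print(render(tree, expanded={"Company", "Sales"}))
```

The key difference from a classic drilldown is that expanding one branch does not navigate away from the rest of the overview.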

On the other hand, in the case of detailed overviews, users also want to be able to condense the data according to hierarchy or group level, so that even here, the boundaries with condensed reports are blurred. And it is not only the classic key figures (that is, numerical values such as quantities and amounts) that can be aggregated here (see also: Statistics versus master data).

Due to the fact that many users do not work exclusively with one particular type of report, it is important to provide user interfaces via which comparable operations can be carried out in the same way. Whether these demands are best met with one tool or with two similar tools is a question that is often discussed, and there are many arguments for and against both options (see also: One tool versus several tools).

What does this mean?

The unification of the two paradigms that originally seemed quite different means that users do not have to decide in advance which tool to use for processing data. They can decide there and then how to navigate through the data, and mixed views are possible.


Key Figures versus Master Data

Key Figures

Traditionally, when reporting about key figures, there is a strong distinction between key figures and characteristics. Key figures are numerical and can usually be condensed with simple formulas (but sometimes only with very complex formulas, or not at all). In contrast, characteristics are normally not numerical. Instead, they provide the criteria on the basis of which key figures can be gathered together. Key figures are uniquely determined by the combination of characteristics. In other words, a combination of characteristics provides the key by which a data record can be identified.
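The idea that a combination of characteristics acts as the key of a data record can be illustrated with a dictionary keyed by characteristic tuples; the characteristics (year, region, product) and figures are invented for the example.

```python
# Key figures indexed by their characteristic combination: the tuple of
# characteristic values is the key that identifies each record.
facts = {
    ("2024", "North", "Product A"): {"quantity": 10, "amount": 1200.0},
    ("2024", "South", "Product A"): {"quantity": 4,  "amount": 480.0},
}

# The combination (year, region, product) uniquely identifies a record ...
rec = facts[("2024", "North", "Product A")]
print(rec["amount"])  # 1200.0

# ... while the key figures themselves can be condensed with simple formulas.
total_quantity = sum(r["quantity"] for r in facts.values())
print(total_quantity)  # 14
```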

There is a distinction here between Analyses (widespread in controlling, for example, in which key figures are analyzed from completely different angles and which therefore require a more dynamic layout) and more Static Reports (such as those needed for financial accounting) (see also: Flexibility versus Layout).

Master Data

Traditionally, when reporting about master data, there is no separation of characteristics and key figures. Usually people simply talk about Attributes that can show various properties. In addition to the question of whether the data can be condensed or not, it is also important to consider other questions, such as how often the values can be used.

There are also two extremes in this case: Very flexible Analysis reports or Ad-hoc Analyses that are used often, for example, in Human Resources, or Fixed Overviews, in which the format is more similar to that of traditional reports.

Blurring the Boundaries

For a long time, tools for creating statistical and master data reports were clearly distinct from one another, not only at SAP. This was partly due to the different demands arising from different program architectures, and partly due to the fact that different types of report are significant in different application areas.

In practice it is becoming clear that additional information is needed to explain key figures: not only more detail in the case of complex key figures, obtained by breaking down how they are calculated, but also further attributes of the characteristics involved.

However, attributes in master data reports are often numerical, that is, key figures. Often users wish to condense these key figures according to a particular point of view on an ad-hoc basis. In practice this is the case especially with hierarchies or when forming groups.

In addition, some attributes that are not numerical can also be condensed, sometimes in large numbers. In the case of dates and times, an earliest/latest possible date or time (for example, earliest start, latest end) is often helpful. Moreover, counters for the number of entries, or for the number of different attribute values within those entries, are important. Together with other key figures they can deliver important information.
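Condensing non-numerical attributes in this way can be sketched in a few lines; the records and attribute names are illustrative assumptions.

```python
from datetime import date

# Hypothetical master data records with non-numerical attributes.
tasks = [
    {"department": "HR", "start": date(2024, 1, 10), "status": "open"},
    {"department": "HR", "start": date(2024, 1, 3),  "status": "done"},
    {"department": "HR", "start": date(2024, 2, 1),  "status": "open"},
]

# Dates condense to an earliest/latest value rather than a sum ...
earliest_start = min(t["start"] for t in tasks)
latest_start = max(t["start"] for t in tasks)

# ... and counters summarize the entries and their distinct attribute values.
entry_count = len(tasks)
distinct_statuses = len({t["status"] for t in tasks})

print(earliest_start, latest_start, entry_count, distinct_statuses)
```

Here min/max and counting play the role that summation plays for classic key figures.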

As departments increasingly merge and also require process-orientated, cross-functional reporting, the sharp distinction between key figure and master data reports is becoming more of a hindrance. It therefore has to be removed.

What does this mean?

Users who want to create reports that combine key figures and other attributes finally have a suitable tool. Regardless of the application, they can easily switch between key figure-orientated and master data-orientated views and methods of navigation without having to change to another tool.


Replication in the Data Warehouse versus Direct Access

Replication in the Data Warehouse

A data warehouse has clear advantages in reporting: Data can be accessed more quickly because it has been stored in the way best suited to analysis. Not only are the system parameters for displaying data optimized, but suitable indexes, aggregates and joins are also stored in the database. Historical data can also be accessed, and consistent analyses across several operative systems, and even external data (for example, via the Web), are possible.

These advantages of the data warehouse come at a cost: additional systems (with additional administrative expense) and additional storage are needed, and the data is, for performance reasons, usually not as up-to-date as in the operative system.

Direct Access

In the case of large amounts of detailed data, the decision has to be made as to whether the data has to be replicated in its entirety in the data warehouse, or simply in condensed form. If users have to create up-to-date reports, they can still access the data in the operative system, since usually data is replicated in the data warehouse only at certain intervals.

Of course, it is preferable that not all of the conveniences of the data warehouse are lost - for example, the general data model, the operating interfaces, and the display options. It therefore makes sense to allow direct access to operative data using the data warehouse tools, rather than offering a separate reporting tool in every operative system.

This has the following advantages: Firstly, it ensures that user interfaces are identical (or nearly identical). Secondly, a uniform data model can be accessed. Thirdly, operational business is not burdened as much by reporting. Fourthly, as a rule, reporting programs can be replaced by new, improved versions more frequently than operative systems can, which also improves direct access.

Blurring the Boundaries

A fluid transition thus leads to a seamless combination of data from the data warehouse and data from operative systems. If historical data has to be combined and compared optimally, and across systems, with real-time data, one central reporting tool has to be used, not separate tools distributed throughout the system landscape. "Combination" does not only mean combining data that is entirely in the data warehouse with data that is entirely in the operative system. More often it is a case of obtaining data from a table in the data warehouse whenever possible and adding the missing current data from the operative system - naturally without users noticing that the data comes from two different sources.
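The combination just described can be sketched as follows. This is only a schematic illustration under simplifying assumptions: the table contents, the "day" field, and the load cutoff are all invented for the example, and a real implementation would of course work with database queries rather than in-memory lists.

```python
# Warehouse contents: replicated up to the last load (here, day 14).
warehouse = [
    {"day": 13, "amount": 100},
    {"day": 14, "amount": 150},
]
# Operative system: live data, including records newer than the last load.
operative = [
    {"day": 14, "amount": 150},
    {"day": 15, "amount": 90},
]
last_load_day = 14

def combined_report(warehouse, operative, last_load_day):
    """Serve rows from the warehouse up to the load cutoff and top them up
    with the missing current rows from the operative system."""
    rows = [r for r in warehouse if r["day"] <= last_load_day]
    rows += [r for r in operative if r["day"] > last_load_day]
    return rows

report = combined_report(warehouse, operative, last_load_day)
print(sum(r["amount"] for r in report))  # 100 + 150 + 90 = 340
```

The cutoff test is what keeps the two sources from overlapping, so the user sees one consistent result set rather than two.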

Only when the efficiency of commercially used computer systems increases and the cost of storage falls will it be possible to transfer even large amounts of detailed data to the data warehouse in real time, as is required for greater flexibility. Moving old data to near-line storage, which is around ten times cheaper than traditional data storage but has somewhat longer access times, is one way of saving on storage costs that already exists. However, even in this scenario, the distributed storage of data has to be hidden from users and managed only by the warehouse administrator.

What does this mean?

The advantages of general reporting are combined with the advantages of real-time analysis. This allows quick, timely decisions to be made on the basis of comprehensive and consolidated information using a uniform user interface.


Source:  Reconciling Conflicts in Reporting