Kofax Analytics for TotalAgility
When KAFTA has no data, no data past a certain point, or shows data different than expected, the source of the problem could be within KAFTA; however, much of the time the issue is in the source data in the TotalAgility Reporting database. So while this guide focuses on KTA Reporting, it is generally the correct place to start troubleshooting a “KAFTA issue.” For true KAFTA issues addressed in KAFTA fix packs, see Issues resolved in KAFTA 1.4 fix packs.
Basic Flow of Reporting Data in KTA
Writing raw reporting data to the reporting staging database
All components in KTA that process activities write raw reporting data messages into the wsa_messages table in the reporting staging database. For an on-premises installation this defaults to TotalAgility_Reporting_Staging.dbo.wsa_messages, while an on-premises multitenant installation uses TotalAgility_TenantName.reportingstaginglive.wsa_messages or TotalAgility_TenantName.reportingstagingdev.wsa_messages, depending on whether it is a live or dev environment.
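As a quick first check, the staging table can be inspected directly. A minimal sketch, assuming the on-premises default names described above (adjust the database and schema names for multitenant environments):

```sql
-- Sketch: count raw reporting messages waiting in the staging table.
-- Uses the on-premises default names; for multitenant, substitute
-- TotalAgility_TenantName.reportingstaginglive.wsa_messages or
-- TotalAgility_TenantName.reportingstagingdev.wsa_messages.
SELECT COUNT(*) AS PendingMessages
FROM TotalAgility_Reporting_Staging.dbo.wsa_messages;
```

A small, fluctuating count is normal between reporting task runs; a large or steadily rising count points to the backlog problem discussed later in this guide.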
Processing reporting messages
In the System Tasks section of the KTA Designer, there is a Reporting task with a default schedule of every minute. When the Kofax TotalAgility Reporting service (Kofax.CEBPM.Reporting.TAService.exe) takes the available reporting task, it starts an instance of the ETL process (Kofax.CEBPM.Reporting.AzureETL.exe) to run the ETL job, which processes a batch of available messages out of the wsa_messages table. During the ETL job, raw data is extracted from the encrypted messages into the other tables of the staging database, and then transformed into the form that is copied into the main TotalAgility_Reporting database (the data warehouse).
Data available in data warehouse database until retention period ends
The data in the TotalAgility_Reporting database is then used externally, for example by execution plans in the KAFTA project or by custom reporting. Field data is retained for only five days by default, while other document data is retained for 3650 days (ten years) by default. For more details, see:
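The default retention windows can be expressed as cutoff dates: anything older than the cutoff is eligible for deletion. A minimal illustration using only the default values stated above:

```sql
-- Sketch: compute the retention cutoffs implied by the defaults
-- (5 days for field data, 3650 days for other document data).
-- Data older than these timestamps is subject to removal.
SELECT
    DATEADD(DAY, -5,    SYSUTCDATETIME()) AS FieldDataCutoff,
    DATEADD(DAY, -3650, SYSUTCDATETIME()) AS DocumentDataCutoff;
```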
Data persists in KAFTA
Under normal operation, KAFTA loads the latest data hourly, and this data persists in the KAFTA_Data database even after it is removed from the reporting database by retention settings. However, manual data loads with date ranges beyond the range of data still available in the reporting database can lead to data loss. For details, see:
Avoid a gap in reporting data while troubleshooting a problem
Because the retention period for field data is short, it is possible that by the time a problem is solved, some data will already be eligible for deletion because it is older than the retention period. This would lead to a gap in reporting data for the deleted period. To avoid this, take these precautions before resolving the problem:
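One such precaution is to back up the data warehouse before the retention window overtakes the affected period, so the data can still be recovered afterward. A hedged sketch, assuming the on-premises default database name; the backup path is hypothetical:

```sql
-- Sketch: back up the data warehouse before old data is purged.
-- The database name is the on-premises default described earlier;
-- the file path is hypothetical and should be adjusted.
BACKUP DATABASE TotalAgility_Reporting
TO DISK = N'D:\Backups\TotalAgility_Reporting_preFix.bak'
WITH COPY_ONLY, COMPRESSION;
```

A COPY_ONLY backup is used here so the sketch does not interfere with any existing backup chain.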
Backlog in wsa_messages
A notable type of problem is one where the reporting service cannot successfully process raw reporting data out of the wsa_messages table, so either no data, or no data past a certain date, is available in the main reporting database or in KAFTA. If there is a large number of messages and the number continues to rise, errors are likely being written to the logs. To check for and quantify this problem, see:
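To see whether the backlog is still growing, the staging table can be sampled repeatedly and the counts compared. A sketch assuming the on-premises default names; the global temporary table used to hold samples is hypothetical:

```sql
-- Sketch: record backlog samples over time to see whether the count rises.
-- Run this batch periodically; ##wsa_backlog_samples is a hypothetical
-- scratch table for the samples.
IF OBJECT_ID('tempdb..##wsa_backlog_samples') IS NULL
    CREATE TABLE ##wsa_backlog_samples (sampled_at DATETIME2, pending INT);

INSERT INTO ##wsa_backlog_samples (sampled_at, pending)
SELECT SYSUTCDATETIME(), COUNT(*)
FROM TotalAgility_Reporting_Staging.dbo.wsa_messages;

SELECT * FROM ##wsa_backlog_samples ORDER BY sampled_at;
```

A count that rises across samples while the Reporting task is scheduled every minute suggests messages are not being processed and the logs should be examined.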
If a smaller number of messages remain unprocessed, these may be orphan messages. For more detail, see:
If an error is preventing wsa message processing, determining the specifics will require analyzing the reporting logs. It will likely also be necessary to increase the logging level so more information about the problem is available. To increase the logging level and collect the logs, see:
When investigating certain issues, it can be helpful to quantify the amount and date range of data that was processed in a specific time. To do this, see:
Specific Types of Problems
Documents marked completed
Although document data is always sent to reporting, KAFTA only uses data from documents that have been marked completed. There are several ways to mark a document as completed.
One type of problem that can prevent the reporting service from processing is timeouts occurring in certain database operations. Many such problems can be addressed by increasing the timeout. For more details, see:
An area that warrants specific attention is that KTA 7.5 and lower drop and recreate table indexes as part of the reporting task. This is unnecessary and no longer occurs in KTA 7.6 and higher, and it can be manually disabled in lower versions. Disabling it is better for performance and prevents a timeout that can occur at this point. For more details, see:
Many of the queries used to process reporting data are MERGE queries, so one type of problem that can occur is the generic SQL error “The MERGE statement attempted to UPDATE or DELETE the same row more than once.” It is important to recognize this as a generic error so that different causes of MERGE errors can be distinguished. For example, even though one MERGE error is fixed in 188.8.131.52, a different one is fixed in 184.108.40.206. Hence the importance of looking beyond the generic error and paying attention to the other specifics, such as the preceding events in the log and the specific stack trace.
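To see why this error is generic, it can be reproduced with any MERGE whose source contains duplicate join keys, entirely independent of KTA. A self-contained illustration (the table variables are hypothetical and not part of the KTA schema):

```sql
-- Sketch: reproduce the generic MERGE error with duplicate source rows.
DECLARE @target TABLE (id INT PRIMARY KEY, val INT);
DECLARE @source TABLE (id INT, val INT);

INSERT INTO @target VALUES (1, 0);
INSERT INTO @source VALUES (1, 10), (1, 20);  -- duplicate join key 1

-- Two source rows match target row 1, so SQL Server raises:
-- "The MERGE statement attempted to UPDATE or DELETE the same row
--  more than once."
MERGE @target AS t
USING @source AS s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET t.val = s.val;
```

Because any duplicate-key condition in the source data produces this same message, only the surrounding log events and stack trace identify which underlying defect is involved.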
An example focusing on these specifics:
Data different than expected
When no errors are occurring, but data in the reporting database seems different than expected, it is best to try to isolate a specific, controlled action that produces the unexpected data. For example, when a higher than expected number of field changes was reported, the cause was narrowed down to confirming table cells: specifically, with table cells, even confirming without modification was recorded as a field change. This issue was fixed in 220.127.116.11: 1327287 Case 25056955: [Reporting] Confirming table cells without modification shows as a changed field
Version Specific Information