Intelligent Data Quality

Description
Data quality is a key concern in Life Science. Regulatory requirements and business integration set the scene for corporate efficiency and cost savings directly linked to the level of data quality. This can, for example, be seen when dealing with reports, where poor data quality causes misinformation and countless hours spent investigating data issues and mismatches.

When moving from legacy systems to new solutions, issues and concerns regarding data quality and migration readiness often occur. New data models bring modern requirements to data structure and consistency, challenging the move from legacy systems. With Intelligent Data Quality we offer a solution that improves data quality prior to migration, ensuring that your new system gets off to a good start. Finally, we offer natural language processing, making it possible to extract structured data from documents for comparison with structured data, to prepare documents for migration, and to provide a better overview of documentation, for example by spotting duplicates.

A data quality analysis can prevent or solve the above challenges by measuring the quality of the data against both self-defined data metrics and standards from other sources such as RMS and GINAS.
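To illustrate what measuring data against self-defined metrics can look like in practice, here is a minimal sketch of a field-level quality scorer. It is not the profiling tool described in this document; the records, field names and metric definitions are hypothetical examples.

```python
# Illustrative sketch only: scoring one field of a record set against
# three simple, self-defined quality metrics. The data and field names
# below are hypothetical, not part of any actual tooling.

def profile_field(records, field, allowed=None):
    """Return completeness, uniqueness and validity ratios for one field."""
    values = [r.get(field) for r in records]
    present = [v for v in values if v not in (None, "")]
    completeness = len(present) / len(values) if values else 0.0
    uniqueness = len(set(present)) / len(present) if present else 0.0
    if allowed is None:
        validity = 1.0  # no value domain defined by the business
    else:
        validity = (sum(v in allowed for v in present) / len(present)
                    if present else 0.0)
    return {"completeness": completeness,
            "uniqueness": uniqueness,
            "validity": validity}

records = [
    {"substance": "Paracetamol", "form": "tablet"},
    {"substance": "Ibuprofen", "form": "capsule"},
    {"substance": "", "form": "syrup"},         # completeness gap
    {"substance": "Ibuprofen", "form": "gel"},  # duplicate value
]

report = profile_field(records, "substance")
print(report)  # completeness 0.75, uniqueness about 0.67
```

A real profiling run would compute such ratios per field across every source and compare them with external references as well.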
There are six components that ensure high data quality:
- Completeness - the data is complete and there are no gaps between what is collected and what was supposed to be collected
- Accuracy - the data is correct and accurately represents the real world
- Validity - the data values are within the domains specified by the business
- Timeliness - the data reflects reality at the expected time
- Consistency - the data is consistent across all records and aligns with the expected data being collected
- Uniqueness - the data is unique and not a duplication

The offering is built on a pure cloud infrastructure, making it possible to utilize major computational and statistical power. The cloud platform enables the use of artificial intelligence and machine learning in data and document analysis, which means the ability to trend major datasets and extract data from natural language in documents.

Data profiling
Data profiling shows the as-is situation of the data quality, which is a large step towards the to-be: a more structured, consistent and reliable set of data. This analysis forms the foundation for assessing which data to focus on improving and thereby scoping the enhancement phase.

We use a data profiling tool that accepts different data sources as input and outputs a thorough analysis of the data quality, measured against defined data metrics as well as external sources such as RMS. Along with the data analysis, NNIT provides recommendations for how to proceed in order to improve the data quality.

Data enhancement
Data quality can be improved through data enhancement, using an automated transformation engine. We have two ways of conducting a data enhancement: a standardized working model and AI document data intelligence. The following five steps show our approach to completing any data enhancement project.
- Actions from data profiling - the suggested actions from data profiling are the starting point for the data enhancement track.
This also prepares the different enhancement approaches.
- Selecting an approach for data enhancement - choose a specific approach for data enhancement by defining the extraction, load and transformation requirements.
- Define rules for data enhancement - use business and technical knowledge to prepare the business and technical route, focusing on creating the rules in general and the user test.
- Estimating and monitoring enhancement effort - use business involvement and transformations to enhance the data; progress is tracked continually in the profiling reports.
- Create and distribute load files - based on the transformations and manual enhancements, data load files can be generated in a suitable format, fitting the data model and application field constraints.

Standardized working model
This approach consists of developing a data transformation engine and executing a series of cleaning and enhancement activities. The purpose of this phase is to enhance data and deliver final load files using manual, automated or intelligent transformations.

AI document data intelligence
This approach has been developed to address an often-seen challenge of limited data about documents, which makes it difficult to identify and find the correct document in time. This is an issue both in migrations and during stable operations. Handled traditionally, users would face a substantial workload consolidating document metadata manually. This can be done more efficiently with an artificial intelligence engine, which is faster and more precise in evaluating metadata.

Migrate and Interface
NNIT offers the potential for preparing the data for two separate purposes. The first is a stable scenario, where we look at data quality in a system that is in production and that we want to enhance or report on. The second is the potential for preparing migration-ready load files.
These will be prepared to emulate the data model of your new system, making it simple to upload or migrate data.

When setting up a data quality monitoring solution, the emphasis is on making it possible for you to make better, smarter and more insightful decisions founded in real data, regardless of whether it comes from documents or structured data. In the data preparation for migration, you gain the ability to start new applications on a data foundation that fits future data models and quality standards.
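As a rough illustration of what producing migration-ready load files can involve, the sketch below maps legacy field names onto a target data model, applies value-level cleaning rules, and serializes the result as a CSV load file. The field names, mapping rules and output format are hypothetical examples, not the actual transformation engine described above.

```python
# Hypothetical sketch: rule-based transformation of legacy records into
# load rows matching an assumed target data model, then CSV serialization.
import csv
import io

# Example mapping from legacy column names to target model fields.
FIELD_MAP = {"SUBST_NAME": "substance_name", "DOSAGE_FRM": "dosage_form"}
# Example value-level cleaning rules defined by the business.
VALUE_RULES = {"dosage_form": {"TAB": "tablet", "CAP": "capsule"}}

def transform(legacy_rows):
    """Rename fields and normalize values according to the defined rules."""
    out = []
    for row in legacy_rows:
        target = {new: row.get(old, "") for old, new in FIELD_MAP.items()}
        for field, mapping in VALUE_RULES.items():
            target[field] = mapping.get(target[field], target[field])
        out.append(target)
    return out

def write_load_file(rows):
    """Serialize transformed rows as a CSV load file (here: to a string)."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(FIELD_MAP.values()))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

legacy = [{"SUBST_NAME": "Paracetamol", "DOSAGE_FRM": "TAB"}]
rows = transform(legacy)
print(write_load_file(rows))
```

In practice the target field names and constraints would come from the new system's data model, so the generated file could be uploaded without further rework.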