How to Import Data

When you want to add or update data, import it into CA APM using the Data Importer. The data that you import is inserted, or updates existing data, in the CA MDB.
In this scenario, the system administrator performs the data import. However, the administrator can grant Data Importer User Access or Data Importer Admin Access to any CA APM user role. User access allows users to create imports, modify or delete their own imports, and view any import that is created by another user. Admin access allows users to create imports and modify or delete any import that is created by any user.
The following diagram illustrates how a system administrator imports data.
[Diagram: How a system administrator imports data]
To import CA APM data, perform these steps:
  1. Review the prerequisites.
  2. Create a data import from a data file or from a legacy map file.
  3. Map the data file columns to data fields.
  4. (Optional) Filter data in the import.
  5. Submit or schedule the import.
  6. Verify the imported data.
Example: Import New Employees
Sam, the CA APM system administrator at Document Management Company, wants to import a group of new employees. The new employees use the product to manage hardware assets. Sam received a comma-separated value (CSV) source data file from Human Resources that contains the employee information. All the new asset manager employees belong to the IT department and work at the company headquarters. However, the source data file also contains data about some new employees who work at other locations and do not belong to the IT department.
Sam only wants to import the data for the IT employees at headquarters. Using the Data Importer and the source data file, Sam creates a data import to incorporate the new employees into the CA MDB. To ensure that only employees in the IT department at headquarters are imported, Sam creates an exclusion filter. After Sam runs the import, he checks the import statistics and the import log file and user interface to verify that the import was successful.
Review the Prerequisites
To ensure that you can successfully import the data, verify that you have completed the following tasks:
  • Prepare a source data file in delimited text format (for example, tab or comma). This file contains the data that you want to import.
    We recommend that you include the main destination object in the name of the source data file. This file naming convention helps you locate your data files when you create your import. A value of NULL in your source data file clears the corresponding destination field value. An empty field in your source data file leaves the corresponding destination field value unchanged.
    If a data value in your source data file contains the selected delimiter, you must use double quotation marks around the data value. For example, you select a comma as the delimiter to import companies. You want to include the data value Document Management Company, Inc. in your source data file. Specify this data value with double quotation marks: "Document Management Company, Inc". For an illustration of these file conventions, see the sketch after this list.
  • (Optional) Copy your source data files from your local server to one of the following locations. You can access these locations on the CA APM application server where the Storage Manager Service is installed. The location depends on whether you are using multi-tenancy.
    [ITAM Root Path]\Storage\Common Store\Import
    [ITAM Root Path]\Storage\Tenant_Name\Import
    If you copy the data file before you create an import, you can then specify the file name when you create the import. If you do not copy the data file first, you can upload the file from your local server when you create the import.
  • (Optional) Copy your legacy map files from a previous product release (if you have these files) from your local server to one of the following locations. You can access these locations on the CA APM application server where the Storage Manager Service is installed. The location depends on whether you are using multi-tenancy.
    [ITAM Root Path]\Storage\Common Store\Import
    [ITAM Root Path]\Storage\Tenant_Name\Import
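The exact layout of a source data file depends on your objects and mappings, but the following sketch (Python, standard csv module) illustrates the conventions described above: the main destination object in the file name, column names in the first row, quoting of values that contain the delimiter, NULL to clear a destination field, and an empty field to leave the destination value unchanged. The column names and file name are hypothetical examples, not required values.

import csv

# Hypothetical contact columns; the columns you need depend on your mapping.
columns = ["Login ID", "First Name", "Last Name", "Company Name", "Department"]

rows = [
    # A value that contains the comma delimiter is quoted automatically by csv.
    ["jsmith", "Jane", "Smith", "Document Management Company, Inc", "IT"],
    # NULL clears the destination field; an empty field leaves it unchanged.
    ["rjones", "Rob", "Jones", "NULL", ""],
]

# Name the file after the main destination object (Contact) so it is easy to locate later.
with open("Contact_New_Employees.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_MINIMAL)
    writer.writerow(columns)  # first row has column names
    writer.writerows(rows)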
Create a Data Import from a Data File
You create a data import using a source data file (delimited text file) that contains the data that you want to import. You select the file, configure the import parameters, and specify the delimiter (for example, a comma or a tab) that separates the data in the file.
You can also create a data import using a legacy map file from a previous product release. For more information, see creating a data import from a legacy map file.
Follow these steps:
  1. Log in to CA APM as the administrator.
    In this scenario, the system administrator performs the data import. However, the administrator can grant Data Importer User Access or Data Importer Admin Access to any CA APM user role, as described earlier in this scenario.
  2. Click Administration, Data Importer.
  3. Click New Import.
  4. Enter the required information in the Basic Information area and supply optional information as needed. The following fields require explanation:
    • Data File
      Specifies the source data file that you want to import.
      If this file is available on the CA APM application server, search for the data file and select the file. If this file is not available on the application server, upload the file.
    • Upload File
      Browse on your local server for a source data file that you want to import or a legacy map file that you want to use to create mappings. This file is uploaded to the CA APM application server.
      The file size is limited by the product environmental settings. For more information, contact your administrator.
    • Main Destination Object
      Specifies the main object for the import.
      The list includes all objects that can be imported. Asset and Model objects are listed with their corresponding families, and Legal Document objects are listed according to legal template. You can also specify All Families or All Templates.
      If your source data file contains assets or models from multiple asset families, or legal documents that use multiple legal templates, use the following selections for this field and specify the particular family or template for each record in your source data file:
      • For an asset, select Asset (All Families).
      • For a model, select Model (All Families).
      • For a legal document, select Legal Document (All Templates).
      Ensure that you select the correct main destination object for your import. You cannot change the main destination object after you save or copy an import.
    • First Row Has Column Names
      Specifies whether the first row in the source data file contains the column names. If the first row does not contain the column names, the names display as generic names, such as Field 1 and Field 2.
    • Tenant
      Specifies the tenant that applies to the import (if you are using multi-tenancy).
      You can select a tenant only when multi-tenancy is enabled in CA APM and you are authorized to access different tenants. If you have access to public data and you have multiple tenants, you can select all tenants.
      If you specify all tenants, your source data file must have a tenant name column that you map to the Tenant Name field.
      If you specify one tenant, verify that all data in your source data file belongs to your selected tenant. If you have data for more than one tenant, data for all tenants is imported into the selected tenant.
    • Data Delimiter
      Specifies the delimiter (for example, comma or tab) that you used in the source data file.
      If a data value in your source data file contains the selected delimiter, you must use double quotation marks around the data value. For example, you select a comma as the delimiter to import companies. You want to include the data value Document Management Company, Inc. in your source data file. Specify this data value with double quotation marks:
      "Document Management Company, Inc"
    • Data File Locale
      Specifies the locale for the source data file. This setting determines the date and time format.
  5. Enter the required information in the Advanced Settings area and supply optional information as needed. The following fields require explanation:
    • Maximum Error Threshold (in %)
      Defines the error threshold, as a percentage of processed records, at which the import stops. We recommend a minimum threshold of 15 percent.
      The Data Importer processes the number of records that are specified on Administration, System Configuration, Data Importer (Maximum Batch Record Size field) before calculating whether the error threshold has been reached. For an illustration of this calculation, see the sketch after these steps.
    • Primary Lookup Object Processing Type
      Specifies the type of import activity (for example, insert or update).
    • Create Secondary Lookup Object
      Creates new secondary lookup objects during the import process. If this option is not selected and a secondary object does not exist, an error occurs.
    • Update Secondary Lookup Object
      Updates the existing secondary lookup objects during the import process. If a secondary object does not exist, an error occurs.
    • Error on Secondary Lookup Object Errors
      Indicates that the Data Importer does not process a primary object insert or update if the secondary object process fails. If a secondary object insert or update process fails and this check box is selected, the insert or update for the primary object also fails. If this check box is not selected, the primary object is created or updated (as long as the object is not dependent on the secondary object). However, the secondary object value is not created or changed. In both situations, the secondary object error is logged in the import log file.
      Example:
      You import a location and the location has a country. If the import fails while trying to update the country object and this check box is selected, the location record is not created. If this check box is not selected, the location record is created, and the country information is not updated.
    • Normalization Behavior
      Specifies whether to normalize the data or write an error message in the log file without normalizing the data.
      This field appears only if you have defined normalization rules.
      • Error on Normalization
        Writes an error message to the Data Importer log file when data that can be normalized is found in the data that you are importing. The data involved is not imported. The log file error message includes the details about the data.
        For example, your data includes the company name Microsoft. The company normalization rules that you created identify Microsoft as a collected (nonauthoritative) value and specify Microsoft Corporation as the normalized (authoritative) value. If you select this option when importing your data, the object with the company name Microsoft is not imported and an error message is written to the log file.
      • Apply Normalization without Error
        Uses the normalization rules to normalize the data that you are importing. If data that can be normalized is found, the data is normalized and imported. No error message about the data is written to the log file.
        For example, your data includes the company name Microsoft. The company normalization rules you created identify Microsoft as a collected (nonauthoritative) value and specify Microsoft Corporation as the normalized (authoritative) value. If you select this option when importing your data, the object with company name Microsoft is normalized. In this example, the company name is changed to Microsoft Corporation and the associated object is imported.
  6. Click Save.
    The import is saved. The Mapping, Exclusion Filter, and Schedule areas of the page are now available for your input.
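As a rough illustration of how the Maximum Error Threshold interacts with batch processing (described in the Advanced Settings above), the following sketch assumes a hypothetical batch size and error counts; the actual calculation is internal to the Data Importer.

# Hypothetical illustration of the error-threshold check described above.
# Assumes the threshold is evaluated only after each full batch has been processed.
MAX_BATCH_RECORD_SIZE = 200   # Maximum Batch Record Size (System Configuration), example value
MAX_ERROR_THRESHOLD = 15      # Maximum Error Threshold in %, the recommended minimum

def threshold_reached(records_processed: int, errors: int) -> bool:
    """Return True when the error percentage meets or exceeds the threshold."""
    if records_processed == 0:
        return False
    return (errors / records_processed) * 100 >= MAX_ERROR_THRESHOLD

# After one full batch of 200 records: 35 errors (17.5%) stop the import,
# while 25 errors (12.5%) let it continue to the next batch.
print(threshold_reached(MAX_BATCH_RECORD_SIZE, 35))  # True
print(threshold_reached(MAX_BATCH_RECORD_SIZE, 25))  # False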
Example: Create a Data Import of New Employees from a Data File
Sam, the CA APM system administrator, performs the following actions to create the data import:
  1. Navigates to Administration, Data Importer and clicks New Import.
  2. Enters New Employees.csv in the Data File field.
    This CSV file is the source data file that Sam received from Human Resources with the new employee information.
  3. Selects Contact for the Main Destination Object and comma for Data Delimiter.
  4. Selects Insert or Update in the Primary Lookup Object Processing Type field and clicks Save.
Create a Data Import from a Legacy Map File
You can create a data import using a legacy map file from a previous CA APM release. The map file defines the corresponding data file and the import parameter settings.
We recommend that you copy your legacy map files and corresponding data files to the CA APM application server before you create the data imports. However, if necessary, you can use the optional steps to upload a legacy map file.
You can also create a data import using a data file only. For more information, see creating a data import from a data file.
Follow these steps:
  1. Click Administration, Data Importer, New Import.
  2. Click Search and Load Map to select a legacy map file that is already available on the CA APM application server.
    The corresponding data file must also be available on the CA APM application server.
    If the legacy map file is not available on the CA APM application server, upload the file using the Upload File field.
  3. (Optional) Upload a legacy map file that is not available on the CA APM application server using the following steps:
    1. In the Upload File field, browse on your local server and select a legacy map file.
      The legacy map file is uploaded and is displayed in the Upload File field.
    2. Click Search and Load Map and select the legacy map file that you uploaded.
      The legacy map file is displayed in the Legacy Map File field.
      The Basic Information is loaded.
    If you receive a warning about the source data file, upload the data file using the Upload File field.
  4. Specify the Advanced Settings and click Save.
    The Mapping, Exclusion Filter, and Schedule areas of the page are now available for your input. The Mapping and Exclusion Filter areas display the data that is loaded from the legacy map file.
    For information about specifying the Advanced Settings, see creating a data import from a data file.
Map Data File Columns to Data Fields
You can map the columns in your source data file to fields in CA APM. You perform column mapping to specify where the source data is imported. You can select most objects and associated fields as destination fields during column mapping.
If you created your data import from a legacy map file, the column mapping exists. If you want to change the values, you can edit the existing mapping rules. You can also add or remove mapping rules and filters.
When you log in, the user role that your administrator assigned to you determines the objects and fields that you can see and use. If your role specifies that you do not have permissions for an object field, the field is not available for a mapping. You can only create a mapping and import data for the objects and fields for which you have permissions.
We recommend that, before you map data, you review the CA APM user interface to determine the required information for a mapping. For example, review the Asset page to see that the asset name, asset family, model, and class are required. Because a model is required to create an asset, you review the Model page to see that the model name and asset family are required. By reviewing the user interface before you create a mapping, you ensure that you have all required information to create a mapping.
Follow these steps:
  1. On the Administration tab, Data Importer page, in the Mapping area for a selected import, click New or click Load Source Fields.
    • New allows you to select the source fields individually from the source data file.
    • Load Source Fields adds all source fields from the source data file.
    If you have existing mappings, Load Source Fields allows you to replace those mappings with the source fields in the source data file. This option also allows you to add the source fields from the source data file that you do not already have in your mappings.
    1. If you clicked Load Source Fields, click the Edit Record icon next to a field.
  2. Click the Select icon next to Source Field (if this field is empty), select a column from your data source, and click OK.
    If this field already contains a source field (because you loaded all source fields), you can skip this step.
    The percent signs that appear before and after the column names identify the names as column headers in your source data file. You can also specify a hard-coded value in the Source Field that you want to apply to all records in your source data file. You can then map the hard-coded value to a Destination Field. The hard-coded values do not display with percent signs so that they can be distinguished from the source data file column names. For more information, see hard-coded values.
  3. Click the Select icon next to Destination Field, select a Destination Field for the selected Source Field, and click OK.
    The destination fields that appear are based on your selected main destination object.
    The destination fields display in hierarchical order. For example, the fields that are listed under Asset Type Hierarchy are Asset Family, Class, and Subclass. The order of the fields in the list represents the field hierarchy. Follow the field hierarchy when you specify mapping rules. For example, for Asset Type Hierarchy, specify a rule for Class before you specify Subclass.
  4. Select the Primary Lookup and Secondary Lookup checkboxes as required.
    1. Select a Primary Lookup check box for each destination field that you want to use to find the primary object. Use the following guidelines when selecting this check box:
      • Select at least one Primary Lookup check box in the column mapping for an import.
      • Do not select this check box if the Destination Field is Note Text (under the Note object). The database data type for the Note Text field does not allow it to function as a lookup field.
    2. Select a Secondary Lookup check box for each destination field that you want to use to find the secondary objects. Use the following guidelines when selecting this check box:
      • Do not select this check box if the destination field is not one of your lookup fields for the secondary object.
      • Do not select this check box if the Destination Field is Note Text (under the Note object). The database data type for the Note Text field does not allow it to function as a lookup field.
  5. Click the Complete Record Edit icon.
  6. Click New again, or click the Edit Record icon next to another source field, to specify more mapping rules.
    To delete a specific mapping rule from the list of mapped columns, click the Deletion icon next to the mapping rule. The column mapping rule is removed from the list.
  7. Click Save.
    Your column mapping is saved.
Example: Map Data File Columns to Data Fields
Sam performs the following steps to map the data file columns in the source data file to the CA APM data fields:
  1. Clicks New in the Mapping area of the Import Details page.
  2. Selects %Login ID% in the Source Field by clicking the Select icon next to Source Field and selecting this item from the dialog.
    The items that are listed in the dialog are the columns from the source data file.
  3. Selects User ID in the Destination Field by clicking the Select icon next to Destination Field and selecting this object from the dialog.
  4. Selects the Primary Lookup check box.
  5. Continues to map the remaining columns in the source data file with CA APM data fields and clicks Save when finished.
Review the Mapping Reference Material
Reference the following information when setting up the column mapping for importing or deleting data.
Primary and Secondary Lookup Combinations
The fields that you select as the primary and secondary lookup in your column mapping are used to search for data in the product database.
  • Simple mapping
    In simple mapping, you specify only the primary lookup. For example, you are importing or deleting a set of company records from a text file into the product database. You specify the Company Name as the primary lookup. If a company with a particular name does not exist in the database when you are importing data, a record is created for the company. The following table shows an example of the lookup for a simple mapping.
Source Field | Destination Field | Primary Lookup | Secondary Lookup
%Company Name% | Company.Company Name | Yes | No
  • Reference field mapping
    In reference field mapping, you specify primary and secondary lookup values. To search for a unique object, specify more than one primary lookup. For example, to search for a company, you can specify Company Name, Parent Company, and Company Type as primary lookup values. In this example, the Data Importer searches for a company with the specified name, the specified parent company, and of the specified company type. If the object does not exist and you are importing data, the record is created (depending on the insert or update option you selected in Advanced Settings). The following table shows an example of the lookup for reference field mapping.
Source Field | Destination Field | Primary Lookup | Secondary Lookup
%Company Name% | Company.Company Name | Yes | No
%Parent Company% | Company.Parent Company.Company Name | Yes | Yes
%Company Type% | Company.Company Type.Value | Yes | Yes
This mapping has both the Primary Lookup and the Secondary Lookup check boxes selected for Parent Company and Company Type. The Data Importer uses the company name in the Parent Company column to look up the parent Company object, and uses the parent Company object (together with the Company Name and Company Type values) to look up the company.
  • Secondary object mapping
    If a mapping rule maps to a secondary object property, the primary lookup values establish a relationship between a secondary object and the reference fields. The following table shows examples of the lookup for a secondary object mapping.
Source Field | Destination Field | Primary Lookup | Secondary Lookup
%Comment% | Legal Document.Legal Party.Comment | No | Yes
%Legal Document ID% | Legal Document.Document Identifier | Yes | No
%Company Name% | Legal Document.Legal Party.Legal Party.Company Name | Yes | Yes
%Legal Template% | Legal Document.Legal Template.Template | Yes | Yes
In the first mapping rule, Legal Document is the primary object, and Legal Party is the secondary object. Comment is a property of Legal Party.
In the third mapping rule, Legal Document is the primary object, and Legal Party is the secondary object. In addition, Legal Party has a reference field in the Company table. The Secondary Lookup check box indicates that the Company Name is used to look up the Company object. The Primary Lookup check box indicates that the Company object is used to look up the Legal Party object.
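The following sketch is a simplified, hypothetical illustration of how the lookup flags could drive a search, using the reference field mapping example above: fields flagged as Secondary Lookup resolve referenced objects first, and fields flagged as Primary Lookup are combined to locate (or create) the primary object. It is not the Data Importer's internal implementation.

# Hypothetical, simplified model of the reference field mapping example above.
mapping_rules = [
    {"source": "%Company Name%",   "destination": "Company.Company Name",                "primary": True, "secondary": False},
    {"source": "%Parent Company%", "destination": "Company.Parent Company.Company Name", "primary": True, "secondary": True},
    {"source": "%Company Type%",   "destination": "Company.Company Type.Value",          "primary": True, "secondary": True},
]

def lookup_criteria(row: dict) -> tuple[dict, dict]:
    """Split one source row into secondary-lookup criteria (resolved first)
    and primary-lookup criteria (used to find the unique primary object)."""
    primary, secondary = {}, {}
    for rule in mapping_rules:
        column = rule["source"].strip("%")   # %...% marks a source data file column
        if rule["secondary"]:
            secondary[rule["destination"]] = row[column]
        if rule["primary"]:
            primary[rule["destination"]] = row[column]
    return primary, secondary

row = {"Company Name": "Document Management Company, Inc",
       "Parent Company": "Document Holdings", "Company Type": "Vendor"}
primary, secondary = lookup_criteria(row)
print("Secondary lookups (resolved first):", secondary)
print("Primary lookup (unique company):", primary)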
Hard-Coded Values
In the column mapping, the percent signs that appear before and after column names identify the names as column headers in your source data file. You can also specify a hard-coded value in the Source Field that you want to apply to all records in your source data file. You can then map the hard-coded value to a Destination Field. The hard-coded values do not display with percent signs to distinguish these values from the source data file column names.
[Image: Hard-coded values in the column mapping. Callout 1: source data file column header. Callout 2: hard-coded value.]
You can define a hard-coded value in the Source Field to expand your source data and to ensure that you include all required fields. Hard-coded values typically do not begin and end with a percent sign (%). If a hard-coded value does use percent signs, ensure that it does not match a column name in your source data file.
Example: Use hard-coded values for asset family
In this example, the assets in your source data file do not contain asset family, which is required when creating an asset. You can add a hard-coded value to your mapping. If all of your assets are hardware, you can enter Hardware in the Source Field. You can map this value to the Asset Family field. If your assets belong to different families, add a column to your source data file with the corresponding asset families before importing or deleting data.
The following information illustrates the difference between values from your source data file and values that are added through hard-coded values:
  • You have an Asset Family column in your source data file. The selection in the Source Field is %asset family%.
  • You do not have an Asset Family column in your source data file. However, all of your assets are hardware assets. You specify a hard-coded value of Hardware in the Source Field.
You can also use the Main Destination Object to specify that all records in your source data file belong to a particular family or template. For example, the Asset (Hardware) selection for Main Destination Object specifies that all source records belong to the hardware asset family.
Multiple Values for a Single Field
You can add a mapping with multiple Source Field values that are mapped to a single Destination Field.
Example: Use multiple values for a single field
Your source data file has two columns with the names Manufacturer and Catalog Name. Combine these columns by selecting both in the Source Field. In this example, the Source Field selection is %Manufacturer% %Catalog Name%.
You can also enter multiple hard-coded values in the Source Field (for example, Document Management Company %model name% IT Department).
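To make the distinction concrete, here is a small, hypothetical sketch of how a Source Field expression could be resolved against one record: tokens wrapped in percent signs are read from the source data file columns, any other text is treated as a hard-coded literal, and multiple tokens are combined into a single value. The column names are the examples used in this section; the resolver itself is an illustration, not the product's logic.

import re

def resolve_source_field(expression: str, row: dict) -> str:
    """Hypothetical resolver: %Column% tokens come from the record, the rest is literal text."""
    return re.sub(r"%([^%]+)%", lambda match: row[match.group(1)], expression)

row = {"model name": "LaserPrinter 5000", "Manufacturer": "Acme", "Catalog Name": "LP-5000"}

# Hard-coded literals combined with a column value (from the example above).
print(resolve_source_field("Document Management Company %model name% IT Department", row))

# Two columns combined into a single destination value.
print(resolve_source_field("%Manufacturer% %Catalog Name%", row))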
Filter Data in the Import
You can identify a subset of records in your source data file that you want to exclude from the import. The Data Importer exclusion filter allows you to filter a part of your data source using exclusion filter rules.
Example: Define an exclusion filter to process returned assets
A CSV file that you receive from your hardware vendor includes assets that were ordered and returned to the vendor. You want to process only the returned assets, so you want to import data to update those records only. You define an exclusion filter to exclude records that do not have a status of Returned.
Follow these steps:
  1. On the Administration tab, Data Importer page, Exclusion Filter area for a selected import, select the Filter Type.
    • And
      Excludes a record from the source data file only if all the rules that you specify are valid for the record.
    • Or
      Excludes a record from the source data file if any of the rules that you specify is valid for the record.
  2. Click New.
  3. Click the Select icon next to Source Field, select a column from your source data file, and click OK.
    The percent signs before and after the column names identify the names as columns from your source data file.
  4. Select the Operator.
    To specify "not equal to", select the "<>" operator.
  5. Enter a Filter Value for the rule.
    You can use special characters and wildcards in the filter value. The rules can process text, numeric, and date fields.
  6. Click the Complete Record Edit icon.
  7. (Optional) Click New and specify more exclusion filter rules.
  8. Click Save.
    The exclusion filter rules are saved and are applied when the import processes.
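A minimal sketch of how And/Or exclusion filter rules could be evaluated against one record, assuming only the <> and = operators and plain string comparison; wildcard and date handling are omitted. The rules shown match the IT department and Headquarters example in this scenario, but the evaluation code is an illustration, not the product's logic.

# Hypothetical evaluation of exclusion filter rules against one source record.
# A record is excluded from the import when the combined rules hold for it.
rules = [
    {"source": "Department", "operator": "<>", "value": "IT"},
    {"source": "Location",   "operator": "<>", "value": "Headquarters"},
]
filter_type = "And"   # "And": all rules must hold; "Or": any rule may hold

def rule_holds(rule: dict, record: dict) -> bool:
    field = record.get(rule["source"], "")
    if rule["operator"] == "<>":
        return field != rule["value"]
    return field == rule["value"]   # "=" comparison; other operators omitted

def is_excluded(record: dict) -> bool:
    results = [rule_holds(rule, record) for rule in rules]
    return all(results) if filter_type == "And" else any(results)

print(is_excluded({"Department": "Sales", "Location": "Branch"}))       # True: excluded
print(is_excluded({"Department": "IT", "Location": "Headquarters"}))    # False: imported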
Example: Create an Exclusion Filter
Sam performs the following steps to create an exclusion filter. The filter eliminates non-IT employees and employees who do not work at the company headquarters from the data import.
  1. Selects And for the Filter Type and clicks New in the Exclusion Filter area of the Import Details page.
  2. Selects %Department% for the Source Field.
  3. Selects <> for the Operator.
  4. Enters IT for the Filter Value.
  5. Clicks the Complete Record Edit icon and clicks New.
  6. Selects %Location% for the Source Field.
  7. Selects <> for the Operator.
  8. Enters Headquarters for the Filter Value.
  9. Clicks the Complete Record Edit icon and clicks Save.
Submit the Import
To start an import immediately, click Submit in the Schedule area of the page. The data source records from the data file for the selected import are processed.
You can specify a data file other than the default (from the Basic Information) if you want to use a different file.
You can also schedule the import for a particular day and time. For more information, see schedule the import.
To view the import jobs for your current selected import, click Associated Jobs on the left side of the page. To view all import jobs for all imports, click Import Jobs on the left side of the page. In the list of import jobs that appears, click Status Message to view the status of an import.
You can also view the log file for more information about the import activity. In the list of import jobs, click View Logs for the selected import.
Schedule the Import
You can schedule an import for a specific time and you can specify the interval for the import (for example, daily or weekly). You can schedule multiple imports to process simultaneously.
Follow these steps:
  1. On the Administration tab, Data Importer page, in the Schedule area for a selected import, select the Is Scheduled check box.
  2. Provide the information for the schedule. The following fields require explanation:
    • Run Time
      Specifies the time of the day, in 24-hour format, to process the import. When you schedule imports, use the local time zone on the CA APM application server.
    • Interval Day
      Specifies the day during the Interval Type to process the import. For example, if the Interval Type is Month and the Interval Day is 1, the import is processed on the first day of the month.
    • Data File
      Specifies a data file name other than the default (from the Basic Information) if you want to use a different file.
      If this file is available on the application server, you can search and select the file. If this file is not available on the application server, you can locate and upload the file.
    • Upload Data File
      Browse for the source data file that you want to import. This file is uploaded to the application server.
    • First Run Date
      Specifies the date when the first import starts to process.
    • Interval Type
      Specifies the type of interval for the import (for example, Day, Month, Quarter, Week, or Year).
    • Interval
      Specifies how often the import processes. This interval is based on the specified Interval Type. For example, if the Interval Type is Week and the Interval is 2, the import processes every two weeks.
    • Last Day of Interval
      Specifies that the import processes on the last day of the selected Interval Type. If you select this check box, any previous value that you added to the Interval Day field is removed, and the Interval Day field is disabled.
  3. Click Submit.
    The data import is scheduled for the specified date and time.
Examples: Using the Schedule Settings
The following examples illustrate the use of the schedule settings.
  • Select Day for Interval Type and 2 for Interval. The import processes every other day.
  • Select Week for Interval Type, 1 for Interval Day, and 3 for Interval. The import processes every three weeks on the first day of the week (Sunday).
  • Select Month for Interval Type, 15 for Interval Day, and 2 for Interval. The import processes every two months on the 15th day of the month.
  • Select Quarter for Interval Type and select Last Day of Interval. The import processes every quarter (every three months) on the last day of the last month in the quarter.
  • Select Year for Interval Type, 1 for Interval Day, and 1 for Interval. The import processes on January 1 of every year.
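The schedule fields can be read as simple date arithmetic. The following sketch, using only the Python standard library, illustrates one interpretation of the third example above (Month for Interval Type, 15 for Interval Day, 2 for Interval); it is an assumption about how the settings combine, not the product's scheduler.

from datetime import date

def next_runs(first_run: date, interval_months: int, interval_day: int, count: int) -> list[date]:
    """Hypothetical reading of a Month schedule: every interval_months months on interval_day."""
    runs, year, month = [], first_run.year, first_run.month
    for _ in range(count):
        runs.append(date(year, month, interval_day))
        month += interval_months
        year, month = year + (month - 1) // 12, (month - 1) % 12 + 1
    return runs

# Month Interval Type, Interval Day 15, Interval 2: every two months on the 15th.
print([d.isoformat() for d in next_runs(date(2024, 1, 15), interval_months=2, interval_day=15, count=4)])
# ['2024-01-15', '2024-03-15', '2024-05-15', '2024-07-15']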
To view the import jobs for your current selected import, click Associated Jobs on the left side of the page. To view all import jobs for all imports, click Import Jobs on the left side of the page. In the list of import jobs that appears, click Status Message to view the status of an import.
You can also view the log file for more information about the import activity. In the list of import jobs, click View Logs for the selected import.
View the Schedule Details
You can view the schedule details for a scheduled import job that you created.
First, open the list of import jobs.
  • To view the scheduled import jobs for your currently selected import, click Associated Jobs on the left side of the page, select the Scheduled check box, and click Go.
  • To view all import jobs for all imports, click Import Jobs on the left side of the page, select the Scheduled check box, and click Go.
In the list of import jobs that appears, click Schedule Details for a selected import.
View the Import Log Files
You can view the Data Importer log files to see the details of all CA-provided and user-defined imports that have completed. The Data Importer creates a log file for each import that you run, including imports that were submitted immediately or scheduled for a future time. All import activities are saved in the log files.
To view the log files, first open the list of import jobs.
  • To view the import jobs for your current selected import, click Associated Jobs on the left side of the page.
  • To view all import jobs for all imports, click Import Jobs.
In the list of import jobs, click View Logs for a selected import. If more than one log file is available (for example, for a scheduled import that has completed a few times already), all files are listed with their corresponding creation dates.
You can view any available LDAP Import Sync log file. If you click Start LDAP Data Import and Sync on the LDAP Data Import and Sync page (Administration, User/Role Management), an import job ID is displayed. Use this job ID to locate the job in the Data Importer list of import jobs. Then click View Logs for that job.
You can also locate and view the import log files in the following location on the CA APM application server:
[ITAM Root Path]\Storage\Common Store\Import\Logs
Review the Import Log File - Best Practices
The Data Importer log file contains information and error messages regarding the processing of import jobs. To help you understand the results of your import and to troubleshoot any errors, use the information in this log file. This section contains some recommended best practices for working with the Data Importer log file.
Match the row number in the data file with the error message in the log file.
A log file error message identifies the corresponding row number from your data file. You can also find the data file row number in the row above or below the error message in the log file.
Sometimes the error message in the log file does not show the data file row number. In this situation, the actual data file values are shown immediately after the error message in the log file.
Count the number of error messages in your log file.
  1. Search for the following phrases in your log file to find the error messages in the file. These phrases are included with the error messages.
    Web Service threw exception
    Error at record
  2. After you find a type of error message, search for that error in the log file and count the number of occurrences.
  3. Identify and search for more error types that appear in your log file and count the number of occurrences.
  4. Compare the count of all errors in your log file with the statistics that the Data Importer generated for the associated import. To view these statistics, click Status Message on the Associated Jobs list or Import Jobs list. This comparison helps you account for all relevant errors and identify error messages that are not valid and can be ignored.
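Counting error occurrences can also be scripted. The following sketch assumes a hypothetical log file name under the location listed earlier and searches for the two phrases named in step 1; adjust the path and phrases for your environment.

from collections import Counter

# Hypothetical file name; actual logs are under [ITAM Root Path]\Storage\Common Store\Import\Logs.
log_path = r"C:\ITAM\Storage\Common Store\Import\Logs\import_job.log"

# Phrases that are included with Data Importer error messages (see step 1 above).
error_phrases = ["Web Service threw exception", "Error at record"]

counts = Counter()
with open(log_path, encoding="utf-8") as log_file:
    for line in log_file:
        for phrase in error_phrases:
            if phrase in line:
                counts[phrase] += 1

for phrase in error_phrases:
    print(f"{phrase}: {counts[phrase]}")
print("Total errors counted:", sum(counts.values()))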
Verify the Imported Data
You verify that your data import succeeded by viewing your data in CA APM and by reviewing the Data Importer statistics.
  • Review the Data Importer Statistics.
    To review the statistics for your current selected import, click Associated Jobs on the left side of the page. In the list of import jobs that appears, click Status Message for your import.
    You can also view the log file for more information about the import activity. In the list of import jobs, click View Logs for the selected import.
  • View the Imported Data in CA APM.
    To view the imported data, navigate to the tab and subtab, if necessary, for the object that you imported (for example, asset, company, or contact). Search for the objects that you imported and verify that the objects are available.
Example: Verify the Data Import of New Employees
After Sam runs the import, he performs the following steps to verify the data import of new employees:
  1. Checks the import statistics.
    • Clicks Associated Jobs or Import Jobs on the left side of the Data Importer page.
    • Clicks Status Message for the import and reviews the statistics.
  2. Views the import log file and the user interface.
    • Clicks View Logs in the list of import jobs and reviews the contents of the log file.
    • Navigates to Directory, Contact on the CA APM user interface. Searches for the new employees. Verifies that the non-IT employees and employees who do not work at company headquarters are not available.