[[ecs-converting]]
=== Map custom data to ECS

A common schema helps you correlate and use data from various sources. Fields for most Elastic modules and solutions (version 7.0 and later) are mapped to the Elastic Common Schema. You may want to map data from other implementations to ECS to help you correlate data across all of your products and solutions.

[float]
[[ecs-converting-before-you-start]]
==== Before you start

Before you start a conversion, be sure that you understand the basics below.

[float]
[[core-or-ext]]
===== Core and extended levels

Make sure you understand the distinction between Core and Extended fields, as explained in the <>.

Core and Extended fields are documented in the <> or, for a single page representation of all fields, please see the {ecs_github_repo_link}/generated/csv/fields.csv[generated CSV of fields].

[float]
[[ecs-conv]]
===== An approach to mapping an existing implementation

Here's the recommended approach for converting an existing implementation to {ecs}.

. Review each field in your original event and map it to the relevant ECS field.
- Start by mapping your field to the relevant ECS Core field.
- If a relevant ECS Core field does not exist, map your field to the relevant ECS Extended field.
- If no relevant ECS Extended field exists, consider keeping your field with its original details, or possibly renaming it using ECS naming guidelines and mapping one or more of your original event fields to it.
. Review each ECS Core field, and attempt to populate it.
- Review your original event data again.
- Consider populating the field based on additional meta-data such as static information (e.g. add `event.category:authentication` even if your auth events don't mention the word "authentication").
- Consider capturing additional environment meta-data, such as information about the host, container, or cloud instance.
. Review other extended fields from any field set you are already using, and attempt to populate them as well.
. Set `ecs.version` to the version of the schema you are conforming to. This will allow you to upgrade your sources, pipelines, and content (like dashboards) smoothly in the future.

[float]
[[ecs-conv-spreadsheet]]
===== Use a spreadsheet to plan your migration

Using a spreadsheet to plan the migration from pre-existing source fields to ECS is a common practice. It's a good way to address each of your fields methodically among colleagues.

To help you plan your migration, Elastic offers a https://ela.st/sample-pipeline-mapping[spreadsheet template]. You can use a CSV version of this spreadsheet to <<ecs-map-custom-data-to-ecs-es-pipeline,create an {es} ingest pipeline>>. This is the easiest and most consistent way to map your custom data to ECS, regardless of your ingest method.
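
To give you an idea of what such a planning file can contain, here is a small, purely illustrative sketch of a mapping CSV. The column names correspond to the supported columns described in the next section; the source fields (`srcip`, `event_time`, `loglevel`) and their ECS targets are hypothetical examples, not rows from the Elastic template.

[source,csv]
----
source_field,destination_field,format_action,timestamp_format,copy_action
srcip,source.ip,,,rename
event_time,@timestamp,parse_timestamp,UNIX_MS,copy
loglevel,log.level,lowercase,,rename
----
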
[float]
[[ecs-map-custom-data-to-ecs-es-pipeline]]
==== Map custom data to ECS using an {es} ingest pipeline

Use {kib}'s **Create pipeline from CSV** feature to create an {es} ingest pipeline from a CSV file that maps custom data to ECS fields.

Before you start, ensure you meet the {ref}/ingest.html#ingest-prerequisites[prerequisites] to create ingest pipelines in {kib}.

. Download or make a copy of the https://ela.st/sample-pipeline-mapping[spreadsheet template].

. Use the spreadsheet to map your custom data to ECS fields. While you can include additional columns, {kib} only processes the following supported columns. Other columns are ignored.
+
.**Supported columns**
[role="child_attributes"]
[%collapsible]
====
`source_field`::
(Required) JSON field key from your custom data. Supports dot notation. Rows with an empty `source_field` are skipped.

`destination_field`::
(Required) ECS field name. Supports dot notation. To perform a `format_action` without renaming the field, leave `destination_field` empty.
+
If the `destination_field` is `@timestamp`, a `format_action` of `parse_timestamp` and a `timestamp_format` of `UNIX_MS` are used, regardless of any provided values. This helps prevent downstream conversion problems.

`format_action`::
(Optional) Conversion to apply to the field value.
+
[%collapsible%open]
.Valid values
=====
(empty)::
No conversion.

`parse_timestamp`::
Formats a date or time value. To specify a format, use `timestamp_format`.

`to_array`::
Converts to an array.

`to_boolean`::
Converts to a boolean.

`to_float`::
Converts to a floating point number.

`to_integer`::
Converts to an integer.

`to_string`::
Converts to a string.

`lowercase`::
Converts to lowercase.

`uppercase`::
Converts to uppercase.
=====

`timestamp_format`::
(Optional) Time and date format to use with the `parse_timestamp` format action. Valid values are `UNIX`, `UNIX_MS`, `ISO8601`, `TAI64N`, and {ref}/mapping-date-format.html[Java time patterns]. Defaults to `UNIX_MS`.

`copy_action`::
(Optional) Action to take on the `source_field`. Valid values are:
+
[%collapsible%open]
.Valid values
=====
(empty)::
(Default) Uses the default action. You'll specify the default action later on {kib}'s **Create pipeline from CSV** page.

`copy`::
Makes a copy of the `source_field` to use as the `destination_field`. The final document contains both fields.

`rename`::
Renames the `source_field` to the `destination_field`. The final document only contains the `destination_field`.
=====
====

. Save and export your spreadsheet as a CSV file.
+
NOTE: {kib}'s **Create pipeline from CSV** feature only supports CSV files up to 100 MB.

. In {kib}, open the main menu and click **Stack Management > Ingest Pipelines > Create pipeline > New pipeline from CSV**.
+
[role="screenshot"]
image::images/kib-create-pipeline-from-csv.png[Create Pipeline from CSV in Kibana,align="center"]

. On the **Create pipeline from CSV** page, upload your CSV file.

. Under **Default action**, select the **Copy field name** or **Rename field** option.
+
For the **Copy field name** option, {kib} makes a copy of the `source_field` to use as the `destination_field` by default. The final document contains both fields.
+
For the **Rename field** option, {kib} renames the `source_field` to the `destination_field` by default. The final document only contains the `destination_field`.
+
You can override this default using the `copy_action` column of your CSV.

. Click **Process CSV**.
+
{kib} displays a JSON preview of the ingest pipeline generated from your CSV file; a sketch of what such a preview might contain follows these steps.
+
[role="screenshot"]
image::images/kib-create-pipeline-from-csv-preview.png[Preview pipeline from CSV in Kibana,align="center"]

. To create the pipeline, click **Continue to create pipeline**.

. On the **Create pipeline** page, you can add additional ingest processors to your pipeline.
+
When you're done, click **Create pipeline**.
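
For reference, the pipeline generated from the hypothetical CSV rows sketched earlier might resemble the following. This is only an illustrative sketch built from standard ingest processors (`rename`, `date`, `lowercase`, `set`); the exact processors and options {kib} emits depend on your CSV and your {kib} version. The final `set` processor is included here to illustrate populating `ecs.version`, as recommended above; its value is a placeholder, so replace it with the ECS version you conform to.

[source,JSON]
----
{
  "description": "Illustrative pipeline mapping hypothetical custom fields to ECS",
  "processors": [
    {
      "rename": {
        "field": "srcip",
        "target_field": "source.ip",
        "ignore_missing": true
      }
    },
    {
      "date": {
        "field": "event_time",
        "target_field": "@timestamp",
        "formats": ["UNIX_MS"],
        "ignore_failure": true
      }
    },
    {
      "rename": {
        "field": "loglevel",
        "target_field": "log.level",
        "ignore_missing": true
      }
    },
    {
      "lowercase": {
        "field": "log.level",
        "ignore_missing": true
      }
    },
    {
      "set": {
        "field": "ecs.version",
        "value": "<ecs-version>"
      }
    }
  ]
}
----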