COPY INTO
Applies to: Databricks SQL Databricks Runtime
Loads data from a file location into a Delta table. This is a retryable and idempotent operation; files in the source location that have already been loaded are skipped. This is true even if the files have been modified since they were loaded. For examples, see Common data loading patterns using COPY INTO.
Syntax
COPY INTO target_table [ BY POSITION | ( col_name [ , <col_name> ... ] ) ]
  FROM { source_clause |
         ( SELECT expression_list FROM source_clause ) }
  FILEFORMAT = data_source
  [ VALIDATE [ ALL | num_rows ROWS ] ]
  [ FILES = ( file_name [, ...] ) | PATTERN = glob_pattern ]
  [ FORMAT_OPTIONS ( { data_source_reader_option = value } [, ...] ) ]
  [ COPY_OPTIONS ( { copy_option = value } [, ...] ) ]

source_clause
  source [ WITH ( [ CREDENTIAL { credential_name |
                                 (temporary_credential_options) } ]
                  [ ENCRYPTION (encryption_options) ] ) ]
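For orientation, a minimal sketch of the command follows; the catalog, schema, table name, and storage path are hypothetical placeholders.

```sql
-- Minimal sketch: load Parquet files into an existing Delta table.
-- The catalog, schema, table, and path are hypothetical placeholders.
COPY INTO my_catalog.my_schema.sales_raw
FROM 's3://my-bucket/sales/2024/'
FILEFORMAT = PARQUET
```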
Parameters
target_table
Identifies an existing Delta table. The target_table must not include a temporal specification.
BY POSITION | ( col_name [ , <col_name> ... ] )

Matches source columns to target table columns by ordinal position. Type casting of the matched columns is done automatically.

This parameter is only supported for the headerless CSV file format. You must specify FILEFORMAT = CSV. FORMAT_OPTIONS must also be set to ("headers" = "false") (FORMAT_OPTIONS ("headers" = "false") is the default).

Syntax option 1: BY POSITION

Matches source columns to target table columns by ordinal position automatically.

- Matching by column name is not used.
- IDENTITY columns and GENERATED columns of the target table are ignored when matching the source columns.
- If the number of source columns doesn't equal the number of filtered target table columns, COPY INTO raises an error.

Syntax option 2: ( col_name [ , <col_name> ... ] )

Matches source columns to the specified target table columns by relative ordinal position, using a target table column name list in parentheses, separated by commas.

- The original table column order and column names are not used for matching.
- IDENTITY columns and GENERATED columns cannot be specified in the column name list; otherwise COPY INTO raises an error.
- The specified columns cannot be duplicated.
- When the number of source columns doesn't equal the number of specified table columns, COPY INTO raises an error.
- For the columns that are not specified in the column name list, COPY INTO assigns default values, if any, and assigns NULL otherwise. If any column is not nullable, COPY INTO raises an error.
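The following sketch illustrates both syntax options against a hypothetical headerless CSV source; the table, columns, and path are placeholders.

```sql
-- Sketch of both column-matching options for a headerless CSV source.
-- The table, columns, and path are hypothetical placeholders.

-- Syntax option 1: match source columns to target columns by ordinal position.
COPY INTO my_schema.users BY POSITION
FROM 's3://my-bucket/users/'
FILEFORMAT = CSV
FORMAT_OPTIONS ("headers" = "false");

-- Syntax option 2: map source columns, in order, onto an explicit target column list.
COPY INTO my_schema.users (first_name, last_name, signup_date)
FROM 's3://my-bucket/users/'
FILEFORMAT = CSV
FORMAT_OPTIONS ("headers" = "false")
```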
source
The file location to load the data from. Files in this location must have the format specified in FILEFORMAT. The location is provided in the form of a URI.

Access to the source location can be provided through:

- credential_name: optional name of the credential used to access or write to the storage location. You use this credential only if the file location is not included in an external location. See credential_name.
- Inline temporary credentials.

Note

If the source file path is a root path, please add a slash (/) at the end of the file path, for example, s3://my-bucket/.

Accepted credential options are:

- AWS_ACCESS_KEY, AWS_SECRET_KEY, and AWS_SESSION_TOKEN for AWS S3
- AZURE_SAS_TOKEN for ADLS Gen2 and Azure Blob Storage

Accepted encryption options are:

- TYPE = 'AWS_SSE_C', and MASTER_KEY for AWS S3
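A sketch of a source clause that supplies inline temporary credentials and encryption for an S3 source; the table, path, and all key values are placeholders.

```sql
-- Sketch: source clause with inline temporary credentials and encryption for S3.
-- The table, path, and all key values are placeholders.
COPY INTO my_schema.events
FROM 's3://my-bucket/events/' WITH (
  CREDENTIAL (
    AWS_ACCESS_KEY = '...',
    AWS_SECRET_KEY = '...',
    AWS_SESSION_TOKEN = '...'
  )
  ENCRYPTION (TYPE = 'AWS_SSE_C', MASTER_KEY = '...')
)
FILEFORMAT = JSON
```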
SELECT expression_list
Selects the specified columns or expressions from the source data before copying into the Delta table. The expressions can be anything you use with SELECT statements, including window operations. You can use aggregation expressions only for global aggregates; you cannot GROUP BY on columns with this syntax.
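A sketch of reshaping and casting source columns during the load; the table, columns, and path are hypothetical, and 'header' is assumed to be the standard CSV reader option for files with a header row.

```sql
-- Sketch: reshape and cast source columns before they are written to the Delta table.
-- The table, columns, and path are hypothetical; 'header' is assumed to be the
-- standard CSV reader option for files with a header row.
COPY INTO my_schema.orders
FROM (
  SELECT
    CAST(order_id AS BIGINT) AS order_id,
    to_date(order_ts) AS order_date,
    amount
  FROM 's3://my-bucket/orders/'
)
FILEFORMAT = CSV
FORMAT_OPTIONS ('header' = 'true')
```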
FILEFORMAT = data_source

The format of the source files to load. One of CSV, JSON, AVRO, ORC, PARQUET, TEXT, BINARYFILE.

VALIDATE
Applies to: Databricks SQL Databricks Runtime 10.4 LTS and above
The data that is to be loaded into a table is validated but not written to the table. These validations include:
Whether the data can be parsed.
Whether the schema matches that of the table or if the schema needs to be evolved.
Whether all nullability and check constraints are met.
The default is to validate all of the data that is to be loaded. You can provide a number of rows to be validated with the ROWS keyword, such as VALIDATE 15 ROWS. The COPY INTO statement returns a preview of the data of 50 rows or less when a number less than 50 is used with the ROWS keyword.
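An illustrative validation-only run; nothing is written to the target table, and the table and path are placeholders.

```sql
-- Sketch: validate the first 15 rows without writing anything to the target table.
-- The table and path are placeholders.
COPY INTO my_schema.orders
FROM 's3://my-bucket/orders/'
FILEFORMAT = CSV
VALIDATE 15 ROWS
FORMAT_OPTIONS ('header' = 'true')
```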
FILES

A list of file names to load, with a limit of 1000 files. Cannot be specified with PATTERN.

PATTERN
A glob pattern that identifies the files to load from the source directory. Cannot be specified with FILES.

| Pattern | Description |
| --- | --- |
| ? | Matches any single character |
| * | Matches zero or more characters |
| [abc] | Matches a single character from character set {a,b,c}. |
| [a-z] | Matches a single character from the character range {a…z}. |
| [^a] | Matches a single character that is not from character set or range {a}. Note that the ^ character must occur immediately to the right of the opening bracket. |
| {ab,cd} | Matches a string from the string set {ab, cd}. |
| {ab,c{de, fh}} | Matches a string from the string set {ab, cde, cfh}. |
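Sketches of restricting the load with FILES or with PATTERN (the two cannot be combined); the table, path, and file names are placeholders.

```sql
-- Sketch: restrict the load with FILES or with PATTERN (the two cannot be combined).
-- The table, path, and file names are placeholders.
COPY INTO my_schema.events
FROM 's3://my-bucket/events/'
FILEFORMAT = JSON
FILES = ('2024-01-01.json', '2024-01-02.json');

COPY INTO my_schema.events
FROM 's3://my-bucket/events/'
FILEFORMAT = JSON
PATTERN = 'date=*/*.json'
```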
FORMAT_OPTIONS
Options to be passed to the Apache Spark data source reader for the specified format. See Format options for each file format.
COPY_OPTIONS
Options to control the operation of the COPY INTO command.

- force: boolean, default false. If set to true, idempotency is disabled and files are loaded regardless of whether they've been loaded before.
- mergeSchema: boolean, default false. If set to true, the schema can be evolved according to the incoming data.
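A sketch that combines both copy options; the table and path are placeholders.

```sql
-- Sketch: re-load files that were already ingested and let the table schema evolve.
-- The table and path are placeholders.
COPY INTO my_schema.events
FROM 's3://my-bucket/events/'
FILEFORMAT = JSON
COPY_OPTIONS ('force' = 'true', 'mergeSchema' = 'true')
```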
Invoke COPY INTO concurrently

COPY INTO supports concurrent invocations against the same table. As long as COPY INTO is invoked concurrently on distinct sets of input files, each invocation should eventually succeed; otherwise, you get a transaction conflict. COPY INTO should not be invoked concurrently to improve performance; a single COPY INTO command with multiple files typically performs better than running concurrent COPY INTO commands with a single file each. COPY INTO can be called concurrently when:

- Multiple data producers don't have an easy way to coordinate and cannot make a single invocation.
- A very large directory can be ingested sub-directory by sub-directory (see the sketch below). When ingesting directories with a very large number of files, Databricks recommends using Auto Loader when possible.
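A sketch of two invocations that can safely run concurrently because their PATTERN clauses select disjoint sub-directories; the table and paths are placeholders.

```sql
-- Sketch: two invocations that can safely run concurrently because their PATTERN
-- clauses select disjoint sub-directories. The table and paths are placeholders.
COPY INTO my_schema.clickstream
FROM 's3://my-bucket/clicks/'
FILEFORMAT = JSON
PATTERN = 'region=us/*.json';

COPY INTO my_schema.clickstream
FROM 's3://my-bucket/clicks/'
FILEFORMAT = JSON
PATTERN = 'region=eu/*.json'
```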
Access file metadata
To learn how to access metadata for file-based data sources, see File metadata column.
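A sketch of capturing file metadata during the load, assuming the _metadata column (see File metadata column) is available for your file format and Databricks Runtime version; the table and path are placeholders.

```sql
-- Sketch: capture the source file path alongside the data, assuming the _metadata
-- column (see File metadata column) is available for the file format and runtime.
-- The table and path are placeholders.
COPY INTO my_schema.events
FROM (
  SELECT *, _metadata.file_path AS source_file
  FROM 's3://my-bucket/events/'
)
FILEFORMAT = JSON
```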
Format options
Generic options
The following options apply to all file formats.
Option |
---|
Type: Whether to ignore corrupt files. If true, the Spark jobs will continue
to run when encountering corrupted files and the contents that have been
read will still be returned. Observable as Default value: |
Type: Whether to ignore missing files. If true, the Spark jobs will continue to run when encountering missing files and the contents that have been read will still be returned. Available in Databricks Runtime 11.3 LTS and above. Default value: |
Type: An optional timestamp to ingest files that have a modification timestamp after the provided timestamp. Default value: None |
Type: An optional timestamp to ingest files that have a modification timestamp before the provided timestamp. Default value: None |
Type: A potential glob pattern to provide for choosing files. Equivalent to
Default value: None |
Type: Whether to skip partition inference during schema inference. This does not affect which files are loaded. Default value: |
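A sketch applying the modification-timestamp and glob-filter behaviors described above. The option names modifiedAfter, pathGlobFilter, and header are the standard Spark/Databricks reader names and are assumptions here, as are the table, path, and timestamp value.

```sql
-- Sketch: ingest only CSV files modified after a cutoff whose paths match a glob.
-- The option names modifiedAfter, pathGlobFilter, and header are assumed standard
-- Spark/Databricks reader names; the table, path, and timestamp are placeholders.
COPY INTO my_schema.sales
FROM 's3://my-bucket/sales/'
FILEFORMAT = CSV
FORMAT_OPTIONS (
  'header' = 'true',
  'modifiedAfter' = '2024-06-01T00:00:00.000000Z',
  'pathGlobFilter' = '*.csv'
)
```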
JSON options
Option |
---|
Type: Whether to allow backslashes to escape any character that succeeds it. If not enabled, only characters that are explicitly listed by the JSON specification can be escaped. Default value: |
Type: Whether to allow the use of Java, C, and C++ style comments
( Default value: |
Type: Whether to allow the set of not-a-number ( Default value: |
Type: Whether to allow integral numbers to start with additional
(ignorable) zeroes (for example, Default value: |
Type: Whether to allow use of single quotes (apostrophe,
character Default value: |
Type: Whether to allow JSON strings to contain unescaped control characters (ASCII characters with value less than 32, including tab and line feed characters) or not. Default value: |
Type: Whether to allow use of unquoted field names (which are allowed by JavaScript, but not by the JSON specification). Default value: |
Type: The path to store files for recording the information about bad JSON records. Default value: None |
Type: The column for storing records that are malformed and cannot be parsed.
If the Default value: |
Type: The format for parsing date strings. Default value: |
Type: Whether to ignore columns of all null values or empty arrays and structs during schema inference. Default value: |
Type: The name of the encoding of the JSON files. See Default value: |
Type: Whether to try and infer timestamp strings as a Default value: |
Type: A string between two consecutive JSON records. Default value: None, which covers |
Type: A Default value: |
Type: Parser mode around handling malformed records. One of Default value: |
Type: Whether the JSON records span multiple lines. Default value: |
Type: Attempts to infer strings as Default value: |
Type: Whether to infer primitive types like numbers and booleans as Default value: |
Type: Specifies the case sensitivity behavior when Default value: |
Type: Whether to collect all data that can’t be parsed due to a data type mismatch or schema mismatch (including column casing) to a separate column. This column is included by default when using Auto Loader. For more details, refer to What is the rescued data column?. Default value: None |
Type: Whether to ingest the entire JSON document, parsed into a single Variant column with the given string as the column’s name. If disabled, the JSON fields will be ingested into their own columns. Default value: None |
Type: The format for parsing timestamp strings. Default value: |
Type: The Default value: None |
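A sketch for multi-line JSON with permissive handling of malformed records; multiLine and mode are standard Spark JSON reader options, and the table and path are placeholders.

```sql
-- Sketch: multi-line JSON documents with permissive handling of malformed records.
-- multiLine and mode are standard Spark JSON reader options; the table and path
-- are placeholders.
COPY INTO my_schema.profiles
FROM 's3://my-bucket/profiles/'
FILEFORMAT = JSON
FORMAT_OPTIONS ('multiLine' = 'true', 'mode' = 'PERMISSIVE')
```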
CSV options
Option |
---|
Type: The path to store files for recording the information about bad CSV records. Default value: None |
Type: The character used to escape the character used for escaping quotes.
For example, for the following record:
Default value: |
Note Supported for Auto Loader. Not supported for Type: The column for storing records that are malformed and cannot be parsed.
If the Default value: |
Type: Defines the character that represents a line comment when found in the
beginning of a line of text. Use Default value: |
Type: The format for parsing date strings. Default value: |
Type: String representation of an empty value. Default value: |
Type: The name of the encoding of the CSV files. See Default value: |
Type: Whether to forcibly apply the specified or inferred schema to the CSV files. If the option is enabled, headers of CSV files are ignored. This option is ignored by default when using Auto Loader to rescue data and allow schema evolution. Default value: |
Type: The escape character to use when parsing the data. Default value: |
Type: Whether the CSV files contain a header. Auto Loader assumes that files have headers when inferring the schema. Default value: |
Type: Whether to ignore leading whitespaces for each parsed value. Default value: |
Type: Whether to ignore trailing whitespaces for each parsed value. Default value: |
Type: Whether to infer the data types of the parsed CSV records or to assume all
columns are of Default value: |
Type: A string between two consecutive CSV records. Default value: None, which covers |
Type: A Default value: |
Type: Maximum number of characters expected from a value to parse. Can be used to
avoid memory errors. Defaults to Default value: |
Type: The hard limit of how many columns a record can have. Default value: |
Type: Whether to infer the schema across multiple files and to merge the schema of each file. Enabled by default for Auto Loader when inferring the schema. Default value: |
Type: Parser mode around handling malformed records. One of Default value: |
Type: Whether the CSV records span multiple lines. Default value: |
Type: The string representation of a not-a-number value when parsing Default value: |
Type: The string representation of negative infinity when parsing Default value: |
Type: String representation of a null value. Default value: |
Type: While reading files, whether to align columns declared in the header with the
schema case sensitively. This is Default value: |
Type: The string representation of positive infinity when parsing Default value: |
Type: Attempts to infer strings as dates instead of timestamp when possible. You
must also use schema inference, either by enabling Default value: |
Type: The character used for escaping values where the field delimiter is part of the value. Default value: |
Type: Specifies the case sensitivity behavior when Default value: |
Type: Whether to collect all data that can't be parsed due to a data type mismatch or schema mismatch (including column casing) to a separate column. This column is included by default when using Auto Loader. For more details, refer to What is the rescued data column?. Default value: None |
Type: The separator string between columns. Default value: |
Type: The number of rows from the beginning of the CSV file that should be ignored
(including commented and empty rows). If Default value: |
Type: The format for parsing timestamp strings. Default value: |
Type: The Default value: None |
Type: The strategy for handling unescaped quotes. Allowed options:
Default value: |
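A sketch for semicolon-delimited CSV with a header row that merges schemas across files and lets the target table schema evolve; header, delimiter, and mergeSchema are standard Spark/Databricks CSV reader option names, and the table and path are placeholders.

```sql
-- Sketch: semicolon-delimited CSV with a header row, merging schemas across files
-- and letting the target table schema evolve. header, delimiter, and mergeSchema
-- are standard Spark/Databricks CSV reader option names; the table and path are
-- placeholders.
COPY INTO my_schema.invoices
FROM 's3://my-bucket/invoices/'
FILEFORMAT = CSV
FORMAT_OPTIONS ('header' = 'true', 'delimiter' = ';', 'mergeSchema' = 'true')
COPY_OPTIONS ('mergeSchema' = 'true')
```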
XML options
Option |
Description |
Scope |
---|---|---|
|
The row tag of the XML files to treat as a row. In the example
XML |
read |
|
Defines a fraction of rows used for schema inference. XML built-in functions
ignore this option. Default: |
read |
|
Whether to exclude attributes in elements. Default: |
read |
|
Mode for dealing with corrupt records during parsing.
|
read |
|
If |
read |
|
Allows renaming the new field that contains a malformed string created by
|
read |
|
The prefix for attributes to differentiate attributes from elements. This
will be the prefix for field names. Default is |
read, write |
|
The tag used for the character data within elements that also have
attribute(s) or child element(s). User can specify the |
read,write |
|
For reading, decodes the XML files by the given encoding type. For writing,
specifies encoding (charset) of saved XML files. XML built-in functions
ignore this option. Default: |
read, write |
|
Defines whether surrounding white spaces from values being read should be
skipped. Default: |
read |
|
Path to an optional XSD file that is used to validate the XML for each row individually. Rows that fail to validate are treated like parse errors as above. The XSD does not otherwise affect the schema provided, or inferred. |
read |
|
If |
read |
|
Custom timestamp format string that follows the
datetime pattern format.
This applies to |
read, write |
|
Custom format string for timestamp without timezone that follows the
datetime pattern format. This applies to TimestampNTZType type. Default:
|
read, write |
|
Custom date format string that follows the
datetime pattern
format. This applies to date type. Default: |
read, write |
|
Sets a locale as a language tag in IETF BCP 47 format. For instance, |
read |
|
Root tag of the XML files. For example, in |
write |
|
Content of XML declaration to write at the start of every output XML file,
before the |
write |
|
Name of XML element that encloses each element of an array-valued column
when writing. Default: |
write |
|
Sets the string representation of a null value. Default: string |
read, write |
|
Compression codec to use when saving to file. This can be one of the known
case-insensitive shortened names ( |
write |
|
If true, throws an error on XML element name validation failure. For example,
SQL field names can have spaces, but XML element names cannot. Default:
|
write |
|
Specifies the case sensitivity behavior when rescuedDataColumn is enabled.
If true, rescue the data columns whose names differ by case from the schema;
otherwise, read the data in a case-insensitive manner. Default: |
read |
|
Whether to collect all data that can’t be parsed due to a data type mismatch and schema mismatch (including column casing) to a separate column. This column is included by default when using Auto Loader. For more details, see What is the rescued data column?. Default: None. |
read |
PARQUET options
Option |
---|
Type: Controls the rebasing of the DATE and TIMESTAMP values between Julian and
Proleptic Gregorian calendars. Allowed values: Default value: |
Type: Controls the rebasing of the INT96 timestamp values between Julian and
Proleptic Gregorian calendars. Allowed values: Default value: |
Type: Whether to infer the schema across multiple files and to merge the schema of each file. Default value: |
Type: Specifies the case sensitivity behavior when Default value: |
Type: Whether to collect all data that can't be parsed due to a data type mismatch or schema mismatch (including column casing) to a separate column. This column is included by default when using Auto Loader. For more details, refer to What is the rescued data column?. Default value: None |
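A sketch that merges differing Parquet file schemas during the read and lets the table schema evolve; mergeSchema is the standard reader option name for the behavior described above, and the table and path are placeholders.

```sql
-- Sketch: merge differing Parquet file schemas during the read and let the target
-- table schema evolve. mergeSchema is the standard reader option name for the
-- behavior described above; the table and path are placeholders.
COPY INTO my_schema.telemetry
FROM 's3://my-bucket/telemetry/'
FILEFORMAT = PARQUET
FORMAT_OPTIONS ('mergeSchema' = 'true')
COPY_OPTIONS ('mergeSchema' = 'true')
```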
AVRO options
Option |
---|
Type: Optional schema provided by a user in Avro format. When reading Avro, this option can be set to an evolved schema, which is compatible with, but different from, the actual Avro schema. The deserialization schema will be consistent with the evolved schema. For example, if you set an evolved schema containing one additional column with a default value, the read result will contain the new column too. Default value: None |
Type: Controls the rebasing of the DATE and TIMESTAMP values between Julian and
Proleptic Gregorian calendars. Allowed values: Default value: |
Type: Whether to infer the schema across multiple files and to merge the schema
of each file.
Default value: |
Type: Specifies the case sensitivity behavior when Default value: |
Type: Whether to collect all data that can't be parsed due to a data type mismatch or schema mismatch (including column casing) to a separate column. This column is included by default when using Auto Loader. For more details, refer to What is the rescued data column?. Default value: None |
BINARYFILE options
Binary files do not have any additional configuration options.
TEXT options
Option |
---|
Type: The name of the encoding of the TEXT files. See Default value: |
Type: A string between two consecutive TEXT records. Default value: None, which covers |
Type: Whether to read a file as a single record. Default value: |
ORC options
Option |
---|
Type: Whether to infer the schema across multiple files and to merge the schema of each file. Default value: |