Categories:

DML Commands - Data Loading

PUT¶

Uploads (i.e. stages) data files from a local directory/folder on a client machine to one of the following Snowflake stages:

  • Named internal stage.

  • Internal stage for a specified table.

  • Internal stage for the current user.

Once files are staged, the data in the files can be loaded into a table using the COPY INTO <table> command.
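For example, a minimal sketch of the two-step flow (the stage name my_int_stage and table name mytable are illustrative):

    -- Stage the local file; with the default AUTO_COMPRESS = TRUE it is
    -- gzip-compressed during upload, so the staged file is named mydata.csv.gz.
    PUT file:///tmp/data/mydata.csv @my_int_stage;

    -- Load the staged file into a table.
    COPY INTO mytable FROM @my_int_stage/mydata.csv.gz;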

Note

  • PUT does not support uploading files to external stages. To upload files to external stages, use the utilities provided by the cloud service.

  • The following Snowflake clients do not support PUT:

    • .NET Driver

  • The ODBC driver supports PUT with Snowflake accounts hosted on the following platforms:

    • Amazon Web Services (using ODBC Driver Version 2.17.5 and higher).

    • Google Cloud Platform (using ODBC Driver Version 2.21.5 and higher).

    • Microsoft Azure (using ODBC Driver Version 2.20.2 and higher).

See also:

GET , LIST , REMOVE

Syntax¶

    PUT file://<path_to_file>/<filename> internalStage
        [ PARALLEL = <integer> ]
        [ AUTO_COMPRESS = TRUE | FALSE ]
        [ SOURCE_COMPRESSION = AUTO_DETECT | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE ]
        [ OVERWRITE = TRUE | FALSE ]

Where:

    internalStage ::=
        @[<namespace>.]<int_stage_name>[/<path>]
      | @[<namespace>.]%<table_name>[/<path>]
      | @~[/<path>]

Required Parameters¶

file://<path_to_file>/<filename>

Specifies the URI for the data file(s) on the client machine, where:

  • path_to_file is the local directory path to the file(s) to upload. If the files are located in the root directory (or a sub-directory) on the client machine:

    Linux/Mac

    You must include the initial forward slash in the path (e.g. file:///tmp/load ).

    Windows

    You must include the drive and backslash in the path (e.g. file://C:\temp\load ).

  • filename is the name of the file(s) to upload. Wildcard characters ( * , ? ) are supported to enable uploading multiple files in a directory.

The URI can be enclosed in single quotes, which allows special characters, including spaces, in directory and file names; however, the drive and path separator is a forward slash ( / ) for all supported operating systems (e.g. 'file://C:/temp/load data' for a path in Windows containing a directory named load data ).
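For instance, a minimal sketch of staging a file from a Windows directory whose name contains a space (the stage name my_int_stage is illustrative):

    -- Single quotes allow the space in 'load data'; note the forward slashes.
    PUT 'file://C:/temp/load data/mydata.csv' @my_int_stage;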

internalStage

Specifies the location in Snowflake where to upload the files:

@[<namespace>.]<int_stage_name>[/<path>]

Files are uploaded to the specified named internal stage.

@[<namespace>.]%<table_name>[/<path>]

Files are uploaded to the stage for the specified table.

@~[/<path>]

Files are uploaded to the stage for the current user.

Where:

  • namespace is the database and/or schema in which the named internal stage or table resides. It is optional if a database and schema are currently in use within the session; otherwise, it is required.

  • path is an optional case-sensitive path for files in the cloud storage location (i.e. files have names that begin with a common string) that limits access to a set of files. Paths are alternatively called prefixes or folders by different cloud storage services.

Note

If the stage name or path includes spaces or special characters, it must be enclosed in single quotes (e.g. '@"my stage"' for a stage named "my stage" ).
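As an illustration, the three stage types look like this in practice (all object names are hypothetical):

    PUT file:///tmp/data/mydata.csv @mydb.myschema.my_int_stage/sales/;  -- named internal stage, with namespace and path
    PUT file:///tmp/data/mydata.csv @%mytable;                           -- stage for the table mytable
    PUT file:///tmp/data/mydata.csv @~/staged;                           -- stage for the current user, with a path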

Optional Parameters¶

PARALLEL = <integer>

Specifies the number of threads to use for uploading files. The upload process separates batches of data files by size:

  • Small files (< 64 MB compressed or uncompressed) are staged in parallel as individual files.

  • Larger files are automatically split into chunks, staged concurrently, and reassembled in the target stage. A single thread can upload multiple chunks.

Increasing the number of threads can improve performance when uploading large files.

Supported values: Any integer value from 1 (no parallelism) to 99 (use 99 threads for uploading files).

Default: 4
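For example, a sketch of uploading a large file with ten threads (the file and stage names are illustrative):

    PUT file:///tmp/data/large_extract.csv @my_int_stage PARALLEL=10;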

Note

A 16 MB (rather than 64 MB) limit applies to older versions of Snowflake drivers, including:

  • JDBC Driver versions prior to 3.12.1.

  • ODBC Driver versions prior to 2.20.5.

  • Python Connector versions prior to 2.2.0.

AUTO_COMPRESS = TRUE | FALSE

Specifies whether Snowflake uses gzip to compress files during upload:

  • TRUE : Files are compressed (if they are not already compressed).

  • FALSE : Files are not compressed (i.e. files are uploaded as-is).

This option does not support other compression types. To use a different compression type, compress the file separately before executing the PUT command. Then, identify the compression type using the SOURCE_COMPRESSION option.

Ensure your local folder has sufficient space for Snowflake to compress the data files before staging them. If necessary, set the TEMP , TMPDIR or TMP environment variable in your operating system to point to a local folder that contains additional free space.

Default: TRUE
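For instance, a sketch of staging a file that was compressed beforehand with bzip2 (the file and stage names are hypothetical):

    -- The file is already compressed, so skip gzip auto-compression
    -- and declare the source compression explicitly.
    PUT file:///tmp/data/mydata.csv.bz2 @my_int_stage
        AUTO_COMPRESS=FALSE
        SOURCE_COMPRESSION=BZ2;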

SOURCE_COMPRESSION = AUTO_DETECT | GZIP | BZ2 | BROTLI | ZSTD | DEFLATE | RAW_DEFLATE | NONE

Specifies the method of compression used on already-compressed files that are being staged:

Supported Values

  • AUTO_DETECT : Compression algorithm detected automatically, except for Brotli-compressed files, which cannot currently be detected automatically. If loading Brotli-compressed files, explicitly use BROTLI instead of AUTO_DETECT .

  • GZIP

  • BZ2

  • BROTLI : Must be used if loading Brotli-compressed files.

  • ZSTD : Zstandard v0.8 (and higher) supported.

  • DEFLATE : Deflate-compressed files (with zlib header, RFC1950).

  • RAW_DEFLATE : Raw Deflate-compressed files (without header, RFC1951).

  • NONE : Data files to load have not been compressed.

Default: AUTO_DETECT
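Because Brotli-compressed files cannot be auto-detected, they must be declared explicitly, for example (names are illustrative):

    PUT file:///tmp/data/mydata.csv.br @my_int_stage SOURCE_COMPRESSION=BROTLI;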

Note

Snowflake uses this option to detect how the data files were compressed so that they can be uncompressed and the data extracted for loading; it does not use this option to compress the files.

Uploading files that were compressed with other utilities (e.g. lzip, lzma, lzop, and xz) is not currently supported.

OVERWRITE = TRUE | FALSE

Specifies whether Snowflake overwrites an existing file with the same name during upload:

  • TRUE : An existing file with the same name is overwritten.

  • FALSE : An existing file with the same name is not overwritten.

    Note that a LIST operation on the stage is performed in the background, which can affect the performance of the PUT operation.

    If attempts to PUT a file fail because a file with the same name exists in the target stage, the following options are available:

    • Load the data from the existing file into one or more tables, and remove the file from the stage. Then PUT a file with new or updated data to the stage (see the sketch after this list).

    • Rename the local file, and then try the PUT operation again.

    • Set OVERWRITE = TRUE in the PUT statement. Do this only if it is actually safe to overwrite a file with data that might not yet have been loaded into Snowflake.
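    A sketch of the first option, assuming a gzip-compressed file already staged on the hypothetical stage my_int_stage:

        -- Load the staged file, remove it from the stage, then stage the updated local copy.
        COPY INTO mytable FROM @my_int_stage/mydata.csv.gz;
        REMOVE @my_int_stage/mydata.csv.gz;
        PUT file:///tmp/data/mydata.csv @my_int_stage;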

Note that if your Snowflake account is hosted on Google Cloud Platform, PUT statements do not recognize when the OVERWRITE parameter is set to TRUE. A PUT operation always overwrites any existing files in the target stage with the local files you are uploading.

The following clients support the OVERWRITE option for Snowflake accounts hosted on Amazon Web Services or Microsoft Azure:

  • SnowSQL

  • Snowflake ODBC Driver

  • Snowflake JDBC Driver

  • Snowflake Connector for Python

Supported values: TRUE, FALSE.

Default: FALSE
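For example (the file and stage names are hypothetical):

    -- Replace a previously staged file that has the same name.
    PUT file:///tmp/data/mydata.csv @my_int_stage OVERWRITE=TRUE;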

Usage Notes¶

  • The command cannot be executed from the Worksheets tab page in the Snowflake web interface; instead, use the SnowSQL client to upload data files, or check the documentation for a specific Snowflake client to verify support for this command.

  • File-globbing patterns (i.e. wildcards) are supported.

  • The command does not create or rename files.

  • Uploaded files are automatically encrypted with 128-bit or 256-bit keys. The CLIENT_ENCRYPTION_KEY_SIZE account parameter specifies the key size used to encrypt the files.

  • The command ignores any duplicate files you try to upload to the same stage. A duplicate file is an unmodified file with the same name as an already-staged file.

    To overwrite an already-staged file, you must modify the file you are uploading so that its contents are different from the staged file, which results in a new checksum for the newly-staged file.

Tip

For security reasons, the command times out after a set period of time. This can occur when loading large, uncompressed data files. To avoid timeout issues, we recommend compressing large data files using one of the supported compression types before uploading the files. Then, specify the compression type for the files using the SOURCE_COMPRESSION option.

You can also consider increasing the value of the PARALLEL option, which can help with performance when uploading large data files.

Furthermore, to take advantage of parallel operations when loading data into tables (using the COPY INTO <table> command), we recommend using data files ranging in size from roughly 100 to 250 MB compressed. If your data files are larger, consider using a third-party tool to split them into smaller files before compressing and uploading them.

Examples¶

Upload a file named mydata.csv in the /tmp/data directory (in a Linux or macOS environment) to an internal stage named my_int_stage :

    PUT file:///tmp/data/mydata.csv @my_int_stage;

Upload a file named orders_001.csv in the /tmp/data directory (in a Linux or macOS environment) to the stage for the orderstiny_ext table, with automatic data compression disabled:

    PUT file:///tmp/data/orders_001.csv @%orderstiny_ext AUTO_COMPRESS=FALSE;

Same example as above, but using wildcard characters in the filename to upload multiple files:

    PUT file:///tmp/data/orders_*01.csv @%orderstiny_ext AUTO_COMPRESS=FALSE;

Upload a file named mydata.csv in the C:\temp\data directory (in a Windows environment) to the stage for the current user, with automatic data compression enabled:

    PUT file://C:\temp\data\mydata.csv @~ AUTO_COMPRESS=TRUE;