# csv

CSV file type support

`xarray_extras.csv.to_csv(x, path_or_buf, **kwargs)`

Print DataArray to CSV.

When `x` has a numpy backend, this function is equivalent to:

```python
x.to_pandas().to_csv(path_or_buf, **kwargs)
```
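For example, with a small numpy-backed `DataArray`, the pandas round trip looks like this (a minimal sketch assuming xarray and pandas are installed; the array values and dimension names are made up for illustration):

```python
import numpy as np
import xarray as xr

# A small 2-D, numpy-backed DataArray with named dims and coordinates
x = xr.DataArray(
    np.array([[1, 2], [3, 4]]),
    dims=["r", "c"],
    coords={"r": ["a", "b"], "c": ["x", "y"]},
)

# With a numpy backend, to_csv(x, buf, **kwargs) behaves like:
csv_text = x.to_pandas().to_csv()
print(csv_text)
```

The row coordinate becomes the DataFrame index and the column coordinate becomes the header, exactly as pandas would write them.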


When `x` has a dask backend, this function returns a dask delayed object, which writes to disk only when its `.compute()` method is invoked.

Formatting and optional compression are parallelised across all available CPUs, using one dask task per chunk on the first dimension. Chunks on other dimensions will be merged ahead of computation.
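The deferred-write behaviour can be illustrated with a plain `dask.delayed` function (a generic sketch of the semantics, not the library's internals; the `write_csv` helper below is hypothetical, whereas the real function builds one task per dask chunk along the first dimension):

```python
import os
import tempfile
import dask


@dask.delayed
def write_csv(path, text):
    # Hypothetical helper standing in for one per-chunk write task
    with open(path, "w") as f:
        f.write(text)
    return path


path = os.path.join(tempfile.mkdtemp(), "out.csv")
d = write_csv(path, "a,b\n1,2\n")

print(os.path.exists(path))  # False: nothing has been written yet
d.compute()                  # ...until compute() triggers the write
print(os.path.exists(path))  # True
```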

Parameters:

• `x` – xarray.DataArray with one or two dimensions
• `path_or_buf` – file path or file-like object
• `kwargs` – passed verbatim to `pandas.DataFrame.to_csv()` or `pandas.Series.to_csv()`

Limitations

• When `x` has a dask backend, `path_or_buf` must be a file path. Fancy URIs are not (yet) supported.
• When `x` has a dask backend, `compression='zip'` is not supported. All other compression methods (gzip, bz2, and xz) are supported.
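Because kwargs are forwarded verbatim to pandas, a supported compression method is requested exactly as it would be in pandas (a sketch at the pandas level, assuming pandas is installed; the file name is illustrative):

```python
import gzip
import os
import tempfile
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
path = os.path.join(tempfile.mkdtemp(), "out.csv.gz")

# compression='gzip' (likewise 'bz2' or 'xz') is passed through to pandas;
# with a dask backend, only compression='zip' is unsupported.
df.to_csv(path, compression="gzip", index=False)

# Verify the output is genuine gzip-compressed CSV
with gzip.open(path, "rt") as f:
    print(f.read())
```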

Distributed

This function supports dask distributed, with two caveats: all workers must write to the same shared mountpoint, and the shared filesystem must strictly guarantee close-open coherency. That is, after one host calls write() and then close() on a file descriptor, another host must immediately be able to open() the file and see the first host's output. Note that, for performance reasons, most network filesystems do not enable this behaviour by default.

Alternatively, one may write to local mountpoints and then manually collect and concatenate the partial outputs.
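When workers write to local mountpoints, the partial CSVs can be stitched back together by keeping the header row from the first part only (a stdlib sketch; the `concat_csv` helper and the file paths are hypothetical):

```python
import os
import tempfile


def concat_csv(parts, dest):
    """Concatenate partial CSV files, keeping only the first header row."""
    with open(dest, "w") as out:
        for i, part in enumerate(parts):
            with open(part) as f:
                lines = f.readlines()
            # Drop the header row of every part except the first
            out.writelines(lines if i == 0 else lines[1:])


# Simulate two partial outputs collected from two workers
tmp = tempfile.mkdtemp()
parts = []
for i, body in enumerate(["a,b\n1,2\n", "a,b\n3,4\n"]):
    p = os.path.join(tmp, f"part{i}.csv")
    with open(p, "w") as f:
        f.write(body)
    parts.append(p)

dest = os.path.join(tmp, "merged.csv")
concat_csv(parts, dest)
print(open(dest).read())
```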