Write to a local file

Parquet

Because Domino uses PyArrow to serialize and transport data, query results can be written directly to a local Parquet file. You can also go through pandas, as shown in the CSV example.

from domino_data.data_sources import DataSourceClient

# instantiate a client and fetch the Redshift data source by name
redshift = DataSourceClient().get_datasource("redshift-test")

res = redshift.query("SELECT * FROM wines LIMIT 1000")

# to_parquet() accepts a path or file-like object
# the whole result is loaded and written once
res.to_parquet("./wines_1000.parquet")
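
If you prefer the pandas route mentioned above, the result can be converted to a DataFrame first. This is a minimal sketch, assuming pandas and a Parquet engine such as pyarrow are available in the environment:

# alternative: convert the result to a DataFrame, then let pandas write the Parquet file
res.to_pandas().to_parquet("./wines_1000.parquet")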

CSV

Because serializing to CSV is lossy, Domino recommends converting the result to a pandas DataFrame and using the pandas.DataFrame.to_csv API, so you can take advantage of the many formatting options it provides.

from domino_data.data_sources import DataSourceClient

redshift = DataSourceClient().get_datasource("redshift-test")

res = redshift.query("SELECT * FROM wines LIMIT 1000")

# see the pandas.DataFrame.to_csv documentation for all options
csv_options = {"header": True, "quotechar": "'"}

res.to_pandas().to_csv("./wines_1000.csv", **csv_options)
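
To sanity-check the written file, you can optionally read it back locally. This sketch assumes pandas is installed; the quotechar must match the value used when writing:

import pandas as pd

# read the CSV back with the same quote character used above
df = pd.read_csv("./wines_1000.csv", quotechar="'")
print(df.shape)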