Thanks for your reply. I switched from pandas with s3fs to AWS Wrangler because pd.to_json wasn't saving to S3.
The compression option seems to be broken when the "lines" flag is used. I am reading from a SQL database and want to save the output to S3 as line-delimited JSON. Maybe there is a better way to do this.
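As a possible workaround until the flag interaction is fixed, the line-delimited JSON can be serialized and compressed explicitly before upload. A sketch under my own assumptions: the frame stands in for the SQL query result, and the bucket/key names in the commented boto3 call are placeholders, not anything from this thread.

```python
import bz2

import pandas as pd

# Placeholder frame standing in for the SQL query result.
df = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})

# Serialize to newline-delimited JSON, then compress explicitly,
# sidestepping the compression/lines interaction in to_json.
payload = df.to_json(orient="records", lines=True, date_format="iso")
compressed = bz2.compress(payload.encode("utf-8"))

# Upload with boto3 (bucket and key are placeholders):
# import boto3
# boto3.client("s3").put_object(
#     Bucket="my-bucket", Key="file.jsonl.bz2", Body=compressed
# )
```

Compressing by hand also makes it easy to verify that the bytes written really shrank, independent of what the library reports.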
Pandas 1.2.0 is available, so we added support for it in the PR above 👆.
Could you give it a try before the official release? You can install it directly from the dev branch:
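For reference, a dev-branch install usually looks something like the following; the repository URL and branch name here are my assumption, not taken from the thread:

```shell
# Install awswrangler straight from the dev branch (URL/branch are assumed)
pip install git+https://github.com/awslabs/aws-data-wrangler.git@dev
```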
Describe the bug
I tested all of the compression types for JSON, and none of them actually changed the file size.
To Reproduce
wr.s3.to_json(
    df=df,
    path='s3://{bucket}/file.bz2',
    compression='bz2',
    orient="records",
    lines=True,
    date_format='iso',
)
I'll keep testing with different orient values, but it doesn't seem to pass the compression along.
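One quick way to tell whether a written object was genuinely bz2-compressed is to inspect its leading bytes for the bz2 magic marker. A local sketch; against S3 the bytes would come from a boto3 read instead of the in-memory stand-in used here:

```python
import bz2

# Stand-in for the first bytes of the S3 object
# (e.g. obtained via boto3 get_object and reading the body).
data = bz2.compress(b'{"a": 1}\n')

# bz2 streams start with the ASCII marker "BZh".
is_bz2 = data[:3] == b"BZh"
print(is_bz2)  # True for genuinely compressed output
```

If the object instead starts with plain JSON text (`{` or `[`), the compression setting was silently ignored.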