Question:
I am using the ZappySys Redshift Data Transfer Task. We had an issue with our cluster: the files were uploaded to S3, but the COPY command failed because of a wrong table name. We want to issue the COPY command again with the correct table name and avoid the long process of generating, compressing, and uploading the data files again. Is there an easy way to do this?
Answer:
We do not offer a direct option to restart the process after a failure, but there is an easy workaround that can save time. This workaround is only possible if you have the [Do not delete target stage files on failure] option checked.
First, look at the log and grab the actual COPY command issued by the Data Transfer Task.
Your cleaned-up COPY command may look like this:
COPY "public"."csv1" FROM 's3://YOUR-BUCKET-NAME/file_D-IzlFLpWUekm23Kwnu7cA_.manifest' credentials 'aws_access_key_id=xxxxxxx;aws_secret_access_key=yyyyyyy' GZIP DELIMITER ',' DATEFORMAT 'YYYY-MM-DD HH24:MI:SS' TIMEFORMAT 'YYYY-MM-DD HH24:MI:SS' IGNOREHEADER 1 REMOVEQUOTES MANIFEST ESCAPE REGION 'us-east-1'
Replace xxxxxxx and yyyyyyy with the correct Access Key and Secret Key used to connect to the S3 bucket, and correct the table name that caused the original failure.
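For example, the corrected command might look like the sketch below. The table name "public"."csv1_fixed" is a placeholder; use whatever table the load was meant for, and keep the manifest path and options exactly as they appear in your log.

-- Same manifest and options as the failed command, pointed at the right table.
-- "public"."csv1_fixed" is a hypothetical name; substitute your actual table.
COPY "public"."csv1_fixed" FROM 's3://YOUR-BUCKET-NAME/file_D-IzlFLpWUekm23Kwnu7cA_.manifest' credentials 'aws_access_key_id=xxxxxxx;aws_secret_access_key=yyyyyyy' GZIP DELIMITER ',' DATEFORMAT 'YYYY-MM-DD HH24:MI:SS' TIMEFORMAT 'YYYY-MM-DD HH24:MI:SS' IGNOREHEADER 1 REMOVEQUOTES MANIFEST ESCAPE REGION 'us-east-1'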
Now use any SQL tool to execute the COPY command and finish loading the S3 files into your Redshift table.
You can also use the Redshift Execute SQL Task to fire the COPY command.
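For example, here is a minimal sketch using the psql command line client; the host, database, and user are placeholders for your actual cluster details, and corrected_copy.sql is assumed to contain the COPY statement above.

# Placeholder connection details; corrected_copy.sql holds the COPY statement.
psql -h your-cluster.xxxx.us-east-1.redshift.amazonaws.com -p 5439 -d yourdb -U youruser -f corrected_copy.sql

After the load completes, you can query the STL_LOAD_ERRORS system table in Redshift to confirm that no rows were rejected.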
Once you are done with the load, you can delete the leftover stage files.
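For example, assuming the AWS CLI is installed and that the data files share the same prefix as the manifest (verify this in the S3 console first), something like the following would remove them:

# Deletes only objects matching the manifest's prefix; double-check the match before running.
aws s3 rm s3://YOUR-BUCKET-NAME/ --recursive --exclude "*" --include "file_D-IzlFLpWUekm23Kwnu7cA_*"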
Contact Us
If you have more questions, feel free to contact us via Live Chat or email us at support@zappysys.com